So, I think the IRC layer is pretty much complete at this point. I’ve added options to enable/disable auto-reconnect, error reporting/handling, and configurable message throttling/flood control (with defaults that match the suggestions in RFC 2813).
I did come across one bug while testing the throttling, where messages were being sent out of order. This was a bug in the way I was inserting messages into the prioritized queue. Just one character made the difference - I was determining the insertion index by counting the number of queued messages with a priority greater than that of the message being sent, when I meant to count the messages with a priority greater than or equal to it. The result was that a newly sent message was put in the queue in front of any other messages with the same priority, instead of behind them all. Adding one tiny little equals sign to the code fixed that one.
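The post doesn’t show the actual code, but the one-character fix can be sketched like this (Python for illustration; the queue layout and the function name are my own assumptions):

```python
def insert_prioritized(queue, priority, message):
    """Insert into a descending-priority list, keeping FIFO order
    among messages that share a priority."""
    # The buggy version counted only strictly-greater priorities, so a new
    # message jumped AHEAD of queued messages with the same priority:
    #   index = sum(1 for p, _ in queue if p > priority)
    # The fix counts greater-OR-EQUAL, placing it behind its peers:
    index = sum(1 for p, _ in queue if p >= priority)
    queue.insert(index, (priority, message))

q = []
insert_prioritized(q, 1, "first")
insert_prioritized(q, 1, "second")   # same priority: lands behind "first"
insert_prioritized(q, 2, "urgent")   # higher priority: jumps the line
# q is now [(2, "urgent"), (1, "first"), (1, "second")]
```

With the buggy `>` comparison, "second" would have been inserted at index 0, ahead of "first" - exactly the out-of-order sending described above.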
As far as throttling goes, it is accomplished with a rolling timer. The connection processor is signaled that there is data waiting in the send queue by an event that is waited on alongside incoming data. Before, it would just dump the whole queue to the server whenever that event was set. With the flood control in place, it dumps one message at a time, until the throttle has been hit. At that point, it makes a note of when sending can be re-enabled and leaves early, before sending everything in the queue. On the next pass, if sending shouldn’t be re-enabled yet, we ignore the send-queue-has-data flag, so that we only respond to incoming data. While throttling is in effect, a wait timeout is observed, so we can loop back around and re-enable the send-queue-has-data flag. Once it is re-enabled, we will hit the flag (if data is waiting) and keep sending data, repeating this process until the queue is emptied out. When sending is enabled, we revert to an infinite timeout, waiting on both receiving and sending.
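The drain-until-throttled pass described above might look something like this sketch (Python; `SimpleThrottle` and every other name here is invented for illustration, not the library’s actual API):

```python
import collections

class SimpleThrottle:
    """Minimal stand-in throttle: allow `burst` sends, then report when
    sending may resume. (A real throttle would also decay over time.)"""
    def __init__(self, burst, per_message):
        self.burst, self.per_message = burst, per_message
        self.sent, self.last = 0, 0.0

    def may_send(self):
        return self.sent < self.burst

    def record_send(self, now):
        self.sent += 1
        self.last = now

    def resume_at(self):
        return self.last + self.per_message

def drain_send_queue(queue, throttle, send, now):
    """Send one message at a time until the queue empties or the throttle
    is hit. Returns None when the queue emptied (the loop then waits on
    send/receive events with an infinite timeout), or the time at which
    sending can be re-enabled (used as the wait timeout on the next pass,
    during which the send-queue-has-data flag is ignored)."""
    while queue:
        if not throttle.may_send():
            return throttle.resume_at()   # leave early; finish later
        send(queue.popleft())
        throttle.record_send(now)
    return None

sent = []
q = collections.deque(f"msg{i}" for i in range(7))
resume = drain_send_queue(q, SimpleThrottle(burst=5, per_message=2.0),
                          sent.append, now=0.0)
# 5 messages go out, 2 remain queued, and resume tells the loop when
# to wake up and try again
```

Returning the resume time instead of sleeping inside the drain is what keeps the loop responsive to incoming data while throttled.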
There are three configuration values for the throttling - enabled, burst limit, and time per message. You are allowed to send a number of messages, up to the burst limit, before you begin being throttled. After that, you can send one message every (time per message). The default values are derived from the algorithm described in RFC 2813, section 5.8 - a burst limit of 5 and a time per message of 2 seconds. So, unless overridden (or disabled), you will be allowed to send 5 messages without being throttled (assuming you haven’t sent anything in the 10 seconds preceding that). Once you begin being throttled, a message will make it out to the server roughly once every 2 seconds (give or take a few ms, of course, depending on the OS scheduler, timer accuracy, etc.). Actually, I lied a tiny bit - it doesn’t wait exactly 2 seconds after each message in this case: with a burst allowance of 5, 5*2 = 10 seconds, so it will wait until 10 seconds after the first message in the ‘flood’ was sent (then 12, then 14, etc.).
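Here’s one way the timing rules described above could be reproduced (a Python sketch of the behavior as I read it, not the actual implementation; the class and its names are made up):

```python
class FloodThrottle:
    """Sketch of the described throttle. Defaults follow the RFC 2813
    section 5.8 suggestions: a burst of 5, 2 seconds per message."""

    def __init__(self, burst=5, per_message=2.0):
        self.burst = burst
        self.per_message = per_message
        self.flood_start = 0.0          # when the current run of sends began
        self.last_send = float("-inf")  # time of the most recent send
        self.count = 0                  # messages sent in the current run

    def try_send(self, now):
        """Return True (and record the send) if a message may go out now."""
        # Going quiet for more than burst * per_message seconds
        # (10 s with the defaults) restores the full burst allowance.
        if now - self.last_send > self.burst * self.per_message:
            self.count = 0
        if self.count == 0:
            self.flood_start = now
        # Within the burst, send freely. After it, message n may go out at
        # flood_start + (n - 1) * per_message - i.e. 10 s after the first
        # message of the flood, then 12, then 14, and so on.
        if self.count >= self.burst and \
                now < self.flood_start + self.count * self.per_message:
            return False
        self.count += 1
        self.last_send = now
        return True

t = FloodThrottle()
burst = [t.try_send(0.0) for _ in range(5)]   # all True: burst allowance
blocked = t.try_send(0.0)                     # False: throttled
sixth = t.try_send(10.0)                      # True: 10 s after flood start
seventh = t.try_send(12.0)                    # True: then every 2 s
```

Note this matches the post’s caveat: during a sustained flood, the sixth message waits until 10 seconds after the first one, rather than 2 seconds after the fifth.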
Error reporting and handling were added as well. If any uncaught exceptions are thrown on the message processor or connection processor thread, they are swallowed and reported via an event. One change I may make later is to put some limits on this - that is, if a configurable number of exceptions occur within a given time frame, give up and stop trying to reconnect or process messages (rather than potentially hammering the server if you keep being disconnected for some reason). I could also implement back-off, whereby it will try to reconnect instantly at first, then slowly back off, waiting longer and longer between attempts until it finally gives up.
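The back-off idea might look something like this (a sketch of the idea being floated above, not anything actually implemented; the name and defaults are made up):

```python
def reconnect_delays(base=1.0, factor=2.0, max_attempts=8):
    """Yield the wait (in seconds) before each reconnect attempt:
    instant at first, then exponentially longer, until we give up
    entirely after max_attempts."""
    yield 0.0                      # first retry is immediate
    delay = base
    for _ in range(max_attempts - 1):
        yield delay
        delay *= factor            # double the wait each failure

delays = list(reconnect_delays())
# [0.0, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0]
```

The reconnect loop would pull one delay per failed attempt and treat exhaustion of the generator as the signal to stop trying, which combines nicely with the give-up-after-N-exceptions limit mentioned above.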
At any rate, I think that about does it for this portion of the project. It did run all night last night and all day today without disconnecting or throwing any exceptions, and I’ve tested out the throttling algorithm. SSL connections work. Everything seems to work great, so it’s time to move on. Still need to decide what I’ll start next… CTCP/DCC support or the plugin system. Decisions, decisions.