Mail services on mail5.webfaction.com and webmail.webfaction.com are currently not working. We are looking into the problem and hope to have normal service restored soon. We will update this entry as we have more information.
2009-04-17 10:24 CDT – troubleshooting on mail5/webmail is still in progress.
2009-04-17 11:33 CDT – we’ve just pointed webmail.webfaction.com at a different server IP. You will be able to access webmail as soon as the DNS change propagates, but your usual webmail address book will be unavailable since it is stored on mail5. If your mailbox resides on mail5, you will still not be able to access your mail. Troubleshooting on mail5 is still in progress.
2009-04-17 15:37 CDT – the problem on mail5 appears to be a failed OS upgrade. We are re-installing packages now.
2009-04-17 17:29 CDT – mail5 is back online and webmail.webfaction.com has been pointed back to mail5. All mail5 users should be able to access their mail now, but the server may be slow to respond for the next several hours as it catches up with today’s incoming mail.
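For anyone waiting on the DNS changes mentioned in the updates above, a quick way to tell whether a change has reached you is to ask your local resolver for the hostname’s address and compare it against the new IP. This is a minimal sketch, not part of our tooling; the demo call uses localhost only so the snippet runs anywhere, and you would substitute webmail.webfaction.com to check the real record:

```python
import socket

def resolved_ip(hostname):
    """Return the first IPv4 address the local resolver reports for hostname."""
    return socket.gethostbyname(hostname)

# Substitute "webmail.webfaction.com" to see which server your resolver
# currently points at; until the DNS change propagates, you may still be
# handed the old address.
print(resolved_ip("localhost"))
```

If the printed address still matches the old server, your resolver’s cache has not picked up the change yet.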
E-mails sent through our e-mail platform are currently rejected by Hotmail. We are in contact with the Hotmail team to get this resolved as soon as possible. We will update this post once it is fixed.
Update: We are still working with Hotmail and the ban should be lifted tomorrow at the latest.
Update (4:09am GMT): E-mails to Hotmail are now going through again.
Web69 is currently down. Its root partition went read-only, and rebooting it revealed an issue that we are working to resolve. We will post updates as they become available.
The problem appears to be with the RAID controller; we are replacing the hardware and restoring all data from backup.
2009-03-09 06:00 PST: Web69 is still down, having suffered a serious RAID controller failure. We have recovered all of the data from the server and are currently restoring it to a new standby server that will replace web69.
2009-03-09 06:16 PST: Web69 is now back online with all its data. We decided to move the data onto a new server to give us more time to check the hardware on the failing machine. We copied all the data from just before the crash, so no data was lost.
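The outage above began with web69’s root partition flipping to read-only, a symptom the kernel produces when the underlying disk or controller reports errors. As a hedged sketch (not part of our monitoring stack), one portable way a health check could detect that condition is to attempt a throwaway write and look for EROFS:

```python
import errno
import os
import tempfile

def is_readonly(path):
    """Return True if the filesystem holding `path` rejects writes (EROFS)."""
    try:
        fd, name = tempfile.mkstemp(dir=path)
        os.close(fd)
        os.remove(name)
        return False
    except OSError as e:
        return e.errno == errno.EROFS

# On a healthy server this prints False; during an incident like web69's,
# a check like this against "/" would report True.
print(is_readonly(tempfile.gettempdir()))
```

Catching the read-only flip early, before reads start failing too, gives more time to schedule a controlled failover like the one described above.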