Monday, August 25, 2008

Loaded Red Hat: A Thousand Emails a Day

How did we celebrate the success of our memory swap ballet last week? We loaded 1.7 million Red Hat emails. It's geeky, but so are the lists! We've now got the complete set archived.

The first Red Hat messages date back to May 1996. Back then there were just a few hundred emails each month. The chatter has grown a lot since, with recent numbers topping 30,000 messages a month. That's 1,000 per day.


It's interesting to see that the #1 most common file attachment type is patch. That makes sense, as these are mostly developer lists.
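
For the curious, computing those attachment stats is nearly a one-liner against MarkLogic's value lexicons. Here's a minimal sketch, assuming a hypothetical <extension> element that stores each attachment's file extension and has a range index on it; the element name is illustrative, not our actual schema.

    (: List attachment file extensions from most to least common.
       Assumes an element range index on the hypothetical extension element. :)
    for $ext in cts:element-values(xs:QName("extension"), (),
                  ("frequency-order", "descending"))
    return concat($ext, ": ", cts:frequency($ext))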

But can anyone explain why on a Linux list the #2 most common attachment is the Outlook-generated winmail.dat!? Is that a good sign or bad sign?

Tuesday, August 19, 2008

How to Shut Down All Your Machines Without Anyone Noticing

Last week we discovered we had to replace some bad memory chips in two of the three machines we use to run the MarkMail service. This blog post tells the story of how we managed the replacement with (almost) none of our visitors noticing.

Architecture

First, a word about our architecture. The three machines I'm talking about here all run MarkLogic Server. We have some other machines in the overall MarkMail system that do things like queue and handle incoming mail, but they're not directly involved in the web site experience. I'm talking here about the three MarkLogic machines that work together in a cluster and that you interact with when you hit http://markmail.org.

The MarkLogic machines have specialized roles. One machine (I like to picture it up front) listens for client connections. It's responsible for running XQuery script code, gathering answers from the other two machines, and formatting responses. The other two machines manage the stored content, about half on each. They support the front machine by actually executing queries and returning content.

I'll refer to the front machine as E1, which stands for evaluator #1. We don't have an E2 yet, but we're planning for one when user load requires it. The back-end machines are D1 and D2, data managers #1 and #2.
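
To make the division of labor concrete, here's a minimal sketch of the kind of XQuery code E1 evaluates. The <message> element name is an assumption about our schema, for illustration only. Notice the query says nothing about D1 or D2: MarkLogic fans the search out to their forests and merges the results back automatically.

    (: Runs on E1; the search work actually happens on D1 and D2. :)
    let $results := cts:search(/message, cts:word-query("procmail"))
    return $results[1 to 10]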

The bad memory was on E1 and D1.

We'll Fix E1 First

We decided to fix E1 first because it's easiest. We gathered the MarkMail team and started at 5pm, the time period with our lowest traffic. It's a little counterintuitive, but since we're a global site we're as busy at 2am (Pacific) as we are at 2pm. The hours around 5pm Pacific still see a lot of traffic, just relatively less. Why? We theorize it's because we get the most traffic during each visitor's local business hours, and the 5pm to 8pm Pacific slot puts those business hours out in the middle of the Pacific.

The E1 server is important because it catches all requests. Our plan was to place a new host, essentially an E2, into the cluster and route all traffic through it instead of E1. There's no state held by the front-end machines, so this is an easy change. We borrowed a machine, added it to the MarkLogic cluster, told it to join the "group" that would make it act like E1, and had our reverse proxy start routing traffic to it instead. We did all this with the MarkLogic web-based administration. It was far too easy, frankly.
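
For the curious, here's roughly what that group change looks like when expressed through MarkLogic's Admin API instead of the web UI. Treat it as a hedged sketch: the host and group names are made up, and we clicked through the administration screens rather than running this.

    import module namespace admin = "http://marklogic.com/xdmp/admin"
        at "/MarkLogic/admin.xqy";

    (: Put the borrowed host into the same group as E1 so it picks up
       the same app server configuration. Names are hypothetical. :)
    let $config := admin:get-configuration()
    let $config := admin:host-set-group($config,
                     xdmp:host("spare.markmail.org"),
                     xdmp:group("Evaluators"))
    return admin:save-configuration($config)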

We immediately saw the E1 access logs go silent and we knew our patient was, in effect, on a heart-lung bypass machine. We told our sysadmin in the colo to proceed.

That's when he told us that on more careful inspection the memory problems were on D1 and D2. The E1 server was just fine. Hmm...

We decided to call the maneuver good practice for later and put things back like we found them.

OK, We'll Fix D1 First

Performing maintenance on a machine like D1 requires more consideration because it's managing content. If we were to just unplug it, the content on the site would appear to be reduced by half. It'd be like winding the clock back to April, with our home page saying we just passed the 10 million message mark.

All email messages go into MarkLogic data structures called "forests". (Get it? Forests are collections of trees, each document being an XML tree.) Our D1 server manages forests MarkMail 1 and MarkMail 2, the oldest two. They're effectively read-only because we're now loading into higher-numbered forests on D2.

It turns out that's a highly convenient fact. It means we could back up the content from D1 and put it on our spare box, now acting as a D3. Then, with a single transactional call to MarkLogic, we could enable the two backup forests on D3 and disable the two original forests on D1. No one on the outside would see a difference. Zero downtime.
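
If you're wondering what that single transactional call looks like, here's a sketch of the swap in Admin API terms, with all four forest changes saved in one configuration update. The backup forest names are hypothetical; the shape is the point: one save, so nobody ever sees a half-swapped site.

    import module namespace admin = "http://marklogic.com/xdmp/admin"
        at "/MarkLogic/admin.xqy";

    let $config := admin:get-configuration()
    (: Enable the copies on D3... :)
    let $config := admin:forest-set-enabled($config,
                     xdmp:forest("MarkMail 1 Copy"), fn:true())
    let $config := admin:forest-set-enabled($config,
                     xdmp:forest("MarkMail 2 Copy"), fn:true())
    (: ...and disable the originals on D1, all in one update. :)
    let $config := admin:forest-set-enabled($config,
                     xdmp:forest("MarkMail 1"), fn:false())
    let $config := admin:forest-set-enabled($config,
                     xdmp:forest("MarkMail 2"), fn:false())
    return admin:save-configuration($config)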

It worked great! It took a few hours to copy things because it's hundreds of gigs of messages, but like a chef on TV we knew what we were going to need for showtime and prepared things in advance.

With the new memory chip in place in D1, we did a transactional switch-back, putting the two original forests back into service and leaving the spare box unused again, ready to help with D2.

We Need an Alternate Approach for D2

Had we planned in advance to work on D2 we probably would have followed the same "use a backup forest" approach we used to work on D1 because it allowed for zero downtime. It would have required pushing ingestion activities to another machine like D1 so the forests could settle down and be read-only, but that's done easily enough. We didn't do this, however, because we were too impatient to wait for the data to copy between machines. Instead we decided to leave the data in place and do a SAN mount switch.

We host all our forest content on a big SAN (a storage area network, basically a bunch of drives working together to act like a big fast disk). All the data managing machines (D1, D2, and the spare acting as D3) have access to it. Usually we partition things into individual mount points so they can't step on each other's toes and corrupt things. You never want two databases to operate against the same data! Here we decided to remove the isolation. We'd have D2 "detach" the MarkMail 3 and MarkMail 4 forests and have our spare machine (acting like D3) quickly "attach" them. We would essentially transfer a few hundred gigs in seconds.
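
Here's a sketch of what that detach/attach amounts to in Admin API terms: reassigning each forest's host, so the spare machine picks up the same SAN directories. Treat this as our best reading of the API rather than a recipe; the host name is hypothetical, and the forests have to come offline around the change, which is where our downtime came from.

    import module namespace admin = "http://marklogic.com/xdmp/admin"
        at "/MarkLogic/admin.xqy";

    (: Move the forest configurations to the spare host. The data itself
       never moves; both hosts see the same SAN mount points. :)
    let $config := admin:get-configuration()
    let $d3 := xdmp:host("d3.markmail.org")  (: hypothetical name :)
    let $config := admin:forest-set-host($config, xdmp:forest("MarkMail 3"), $d3)
    let $config := admin:forest-set-host($config, xdmp:forest("MarkMail 4"), $d3)
    return admin:save-configuration($config)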

This system change couldn't be made transactionally, so we had a decision to make: Is it better to turn off the MarkMail site for a short time or let the world see a MarkMail with only half its content? We decided to just turn off the site. Our total downtime for the switch was 43 seconds going over, and just over a minute coming back after the memory change.

We think we could do it faster next time with some optimizations in the MarkLogic configuration -- turning off things like index compatibility checks, which we know we don't need. Maybe 20 seconds, or even 15.

The Moral

Looking back, we're happy that we could cycle through disabling every machine in our MarkLogic cluster without any substantial downtime. Looking forward, we expect operations like this to get easier. If and when we add a permanent E2 machine to the cluster, we won't have to do anything special to take a front-end out of commission: our load balancer will just automatically route around any unresponsive front-end servers. We were also happy to see that our configuration for SAN-based manual failover works. We proved that as long as another machine can access the SAN, we'll be able to bring the content back online should a back-end machine fail.

Everyone on the MarkMail team works at Mark Logic, the company that makes the core technology that powers our site. In fact, in years past some of us were directly involved in building that technology. But despite our familiarity, we were still delighted to take the production MarkLogic cluster out for a walk and get it to do tricks. It did the right thing time after time with every disconnect, reconnect, and reconfiguration, and we couldn't help but feel a sense of pride. This is some fun software! If you're a Mark Logic customer, we trust you know what we mean.

A non-techie friend once asked why managing a high-uptime web site was hard. I said, "It's like we're driving from California to New York and we're not allowed to stop the car. We have to fill the gas tank, change the tires, wash the windows, and tune the engine but never reduce our speed. And really, because we're trying to add new features and load new content as we go, we need to leave California driving a Mini Cooper S and arrive in New York with a Mercedes ML320."

So far so good! Here's to the long roads ahead...

Sunday, August 17, 2008

Pillow Talk

We have a bit of a tradition at MarkMail where we give away T-shirts at the conferences we attend. On the front of each shirt we print the mailing list traffic chart generated for the community whose conference we're attending. Last year we did it at ApacheCon in November, then again at XML 2007 in December. We did it at JavaOne too. They're fun because they're personalized, and for the recipients the long bars often bring back memories of fast growth, new product releases, and raging flamewars.

One of the recipients of a T-shirt at XML 2007 was B Tommie Usdin. Tommie doesn't like to wear T-shirts. No, she likes to make T-pillows out of them instead. Recently she emailed us a picture of her handiwork:


We just had to share.

Saturday, August 16, 2008

MarkMail has Procmail

This past week we loaded the Procmail list archives into MarkMail, and I wanted to pause and mention it here because it's the kind of thing readers of our blog would probably appreciate.

Procmail, for those who don't know, is a tool for filtering email. It lets you define complex rules for email processing. You can file messages into folders, quarantine spam, block viruses, and more. First released back in 1990, it's an oldie but goodie for people who want to do advanced things with email and aren't afraid to do some rule file hacking.

Of course not all is great with Procmail. It's arcane and fickle, with a rule syntax that confuses new users. It hasn't had a new release in a long while, nor have its official docs been kept up to date. Answers to common questions aren't on the web site. As a result, every time I've wanted to do something non-trivial with Procmail, I've had to spend a fair amount of time Googling for answers and hunting for samples.

I think that can change. With this list load MarkMail lets you search 25,000 emails spanning the last 8 years where people have been doing Q&A for each other. I expect those emails will give me some good A's for my Q's. Hopefully they'll do the same for you.

P.S. Of course, there are almost as many emails about Procmail outside the Procmail list as there are inside it.

Thursday, August 7, 2008

Interview with KDE

Earlier this week KDE News published an interview with us about our recent loading of the KDE community list archives.

Our interviewer, Jos Poortvliet, asked some interesting questions on topics we haven't spoken much about before: how we select which lists to load, and what technical challenges we hit in gathering and loading 2.7 million emails. If you're curious about how things work at MarkMail on the loading side, check it out.

Wednesday, August 6, 2008

Blogger Names us a "Blog of Note"

Earlier today we received a comment on one of our blog posts that said, "Congratulations on being named a Blog of Note this week!" It seemed like a perfect comment-spam ploy: say something nice so the blog owner won't delete your comment. Yet something about the comment smelled non-fishy. It didn't have any sketchy links like most spam does. I thought maybe it was real. To my surprise and happiness, it was!

We were listed by the Blogger Team as a "Blog of Note" for August 4th:


Thanks, Blogger!