Blog Archives

Git mirror of mozilla-central

A while ago I spent some time generating the perfect git mirror of mozilla-central, and it’s now up on github.  Here’s the story behind the repository.  If you’re not interested in history lessons, scroll down to the "What does this mean for me?" section.


Jeff spent quite some time convincing me that git is superior to mercurial.  He was right, and I’m glad I listened to him.  So I decided to use git for most of my Mozilla development.  Some time before that, Chris Double had gone through the trouble of creating a git mirror of mozilla-central using hg-git, so I started using that repository.  All was well until one day Jeff taught me about grafting two git repositories: replacing the parent of a commit in one repository so that it points to a commit in another local repository.  Jeff had created a git mirror of the old Mozilla CVS repository.  The curious reader may realize what this meant: you could graft the git mirror of mozilla-central onto the old CVS mirror, and you would get yourself a repository containing all of Mozilla’s history.  That’s right!  No more cross-blaming stuff on hg and bonsai.  You would just run git log or git blame in the grafted repository, and things would work as if we had never abandoned multiple years of the project’s history when we migrated from CVS to mercurial.  Beautiful!
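For readers who want to try the idea themselves, here is a minimal, self-contained sketch of grafting (the repository names and commit messages are invented stand-ins for the real CVS and hg mirrors):

```shell
set -e
rm -rf /tmp/graft-demo && mkdir /tmp/graft-demo && cd /tmp/graft-demo

# Two tiny repositories standing in for the CVS mirror and the hg mirror.
git init -q cvs-mirror
git -C cvs-mirror -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "last CVS commit"
cvs_tip=$(git -C cvs-mirror rev-parse HEAD)

git init -q hg-mirror
cd hg-mirror
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first hg commit"

# Make the CVS mirror's objects available in this repository.
git fetch -q ../cvs-mirror HEAD

# The grafts file maps "<commit> <new-parent>"; here we declare that the
# root of the hg history has the CVS tip as its parent.
root=$(git rev-list --max-parents=0 HEAD)
echo "$root $cvs_tip" > .git/info/grafts

# git log now walks straight through the graft into the CVS history.
git log --oneline
```

(Newer versions of git prefer `git replace --graft` for this, but the grafts file is what we used at the time and it still works.)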

Now, grafting two repositories has some problems.  Graft support was added as an afterthought, which means that you cannot publish a grafted repository so that others can consume it, and you might occasionally find that some git commands do not handle grafted repositories properly.  So, I took it upon myself to share the joy of the full history with everyone else in the project.  That was easier said than done!

We discovered that git’s history rewriting tool, the filter-branch command, doesn’t really know about grafts, which has the exciting side effect that if you issue a filter-branch command in your grafted repository starting at the parent of the graft point, filter-branch will create a full alternate history of your repository with different commit SHA1s (since the parent SHA1 has changed), and that alternate history is a real, non-grafted git repository.  So I took Chris’ and Jeff’s repositories, grafted them together, and started running filter-branch to convert the grafted repository into a regular one.  After about a day or so (yes, git filter-branch is that slow), I had a nice error message complaining that I had a commit with an invalid author line.  What the heck, the reader might ask?  It turns out that mercurial is a hipster when it comes to author lines for commits, and git is bipolar.
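To make the trick concrete, here is a toy, self-contained version of baking a graft into real history with filter-branch (the commits are invented stand-ins; on the real repository this pass took about a day):

```shell
set -e
rm -rf /tmp/bake-demo && mkdir /tmp/bake-demo && cd /tmp/bake-demo
git init -q repo && cd repo
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }

# One commit standing in for the tip of the old CVS history.
g commit -q --allow-empty -m "last CVS commit"
old=$(git rev-parse HEAD)

# An unrelated branch standing in for the hg-converted history.
git checkout -q --orphan hg-history
g commit -q --allow-empty -m "first hg commit"
g commit -q --allow-empty -m "later hg commit"

# Graft the root of the hg history onto the CVS tip.
root=$(git rev-list --max-parents=0 HEAD)
echo "$root $old" > .git/info/grafts

# filter-branch rewrites every commit, turning the grafted parent into a
# real parent pointer (and changing every SHA1 past the graft point).
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f -- --all
rm -f .git/info/grafts

# The joined history survives even without the grafts file.
git log --oneline
```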

Mercurial treats the author information for a given changeset as basically a free-form text field.  You can put anything you want in there, and Mercurial will store it for you and display it as is.  What you see is what you put into it (although not necessarily what you intended.)  Git, however, has a stricter notion of what an author line can be.  To put it roughly, git expects the author information to be in the form of "Name <email>" (yes, it won’t even allow multiple people to take credit for a commit!).  The author lines that hg-git produces from mercurial changesets were sort of sanitized to conform to that format, but not quite, and weird things that we have in our mercurial history, such as this changeset from Ms2ger, confused hg-git.  At this point, it was very easy to blame hg-git, or at least Ms2ger, but being the responsible person that I am(!), I decided to delve a little bit deeper into this.  Having looked into git’s source code, it turns out that most of its high-level tools enforce this author line format, but some of its internal tools don’t, and readers who know anything about git’s source code know that looking for anything remotely resembling consistency in it is like looking for a needle in a haystack where you know there’s no needle to be found.  Hence the bipolarity diagnosis for git.  Now, it was time to get practical and address the problem somehow.
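You can see the strict side of git for yourself: this little sketch shows a proper "Name <email>" author being accepted and a free-form, hg-style author line being rejected by git commit:

```shell
set -e
rm -rf /tmp/author-demo && mkdir /tmp/author-demo && cd /tmp/author-demo
git init -q
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }

# A well-formed "Name <email>" author line is accepted.
g commit -q --allow-empty -m "good" --author="Jane Doe <jane@example.com>"

# A free-form author line with no <email> part is rejected by the
# high-level tooling (some of git's internal tools would take it).
if g commit -q --allow-empty -m "bad" --author="just some freeform text" 2>/dev/null
then
  echo "unexpectedly accepted"
else
  echo "rejected, as expected"
fi
```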

So I decided to fix hg-git, because, "what could possibly go wrong?".  The fix itself was fairly easy, even for somebody like me who only pretends to know Python (and really just looks up all of the language constructs and library functions on Google and types them into his text editor.)  I made the fix, tested it, and it avoided the Ms2ger problem!  So I went ahead and attempted to convert mozilla-central’s mercurial repository using my patched hg-git.  Little did I know that hg-git is the slowest piece of software ever written!  After 3-4 days, it finally finished converting the seventy-something thousand changesets in the source Mercurial repository.  And after a day of running git filter-branch (remember what the workflow looks like?), I came into the office one morning to find that filter-branch had died on another commit, further down the history line, again because of a bad author line.

To keep this blog post short enough that you can actually download it over a fast connection, let’s just say I had to go through this whole cycle a few more times, each time fixing more problems that the hg-git authors did not anticipate.  With a turn-around time of about a business week for each attempt, you can probably guess why I grin when people complain these days about waiting 4-5 hours for their try server results.

Finally I had fixed all of the hg-git bugs that the mozilla-central history helped me catch.  And being a good open source citizen and all of that, I upstreamed my hg-git patches (well, really here’s where I upstreamed them, since I was confused about the patch submission process for hg-git!).

So, I finally had a full git mirror of mozilla-central containing all of Mozilla’s history.  This was maybe a couple of months after I started the project (which I was working on in my free time!); I had shed enough blood and tears, and I thought the result was useful enough that I sneaked it in under mozilla’s github account.

Then I decided that a git mirror that does not keep up with the main repository is not worth much, so I spent a little time showing off my lack of shell scripting skills by creating a cron script which updates the git mirror based on what gets pushed to mozilla-central.  A few months later somebody (sorry, I don’t remember who… maybe jlebar?) pinged me and asked whether my mirror had an inbound branch.  I said no, and I wanted to explain why I didn’t really have time to add one, but then I realized that it would take me less time to modify the mozilla-central update script to also include mozilla-inbound.  So I sat down and did that, and now I had a branch tracking mozilla-inbound!

I didn’t really talk about the existence of the repository much, mostly because I wanted to write this blog post first (and it took me only about a year to do that!).  Then some time ago Andreas Gal told me that the b2g project is based on my repository, and that there are apparently tons of people using it for their day-to-day development.  This was both nice to hear and frightening at the same time (scroll down to the Fun Facts section to see why!), and it motivated me to finally sit down and write this blog post (about a couple of months after talking to Andreas… I know, I know!).

What does this mean for me?

If you’re a Mozilla developer who’s fed up^H^H^H^H^H^H prefers to use git as opposed to mercurial, just clone the git mirror and start using git.  There’s even an inbound branch in that repository if you really want to live on the bleeding edge.

If you’re a Mozilla developer who has been using Chris’ git mirror, you should switch to this mirror instead, since Chris has stopped updating his.  The switch should be fairly painless: pull my mirror’s master branch and rebase your local branches on top of it.  Once you have rebased all of your branches, git gc will kick in at some point and clean out the old history that you’re no longer using.
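The switch boils down to repointing origin and rebasing.  Here is a self-contained sketch with two toy repositories standing in for the old and new mirrors (the real remote URLs differ, of course):

```shell
set -e
rm -rf /tmp/switch-demo && mkdir /tmp/switch-demo && cd /tmp/switch-demo
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }

make_mirror() {  # same content, but different SHA1s (the dates differ)
  git init -q -b master "$1" && cd "$1"
  echo shared > file.txt && git add file.txt
  GIT_AUTHOR_DATE="$2" GIT_COMMITTER_DATE="$2" \
    git -c user.name=demo -c user.email=demo@example.com \
        commit -q -m "shared history"
  cd ..
}
make_mirror old-mirror 2011-01-01T00:00:00
make_mirror new-mirror 2012-01-01T00:00:00

# A working clone of the old mirror, with a local feature branch.
git clone -q old-mirror work && cd work
g checkout -q -b my-feature
echo patch > patch.txt && git add patch.txt && g commit -q -m "my patch"

# Repoint origin at the new mirror, fetch, and rebase the local work
# from the old mirror's master onto the new mirror's master.
git remote set-url origin ../new-mirror
git fetch -q origin
g rebase -q --onto origin/master master my-feature
```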

If you’re interested in having a repository with the full history of the Mozilla project, including the CVS history, either clone the git mirror and run git log and git blame locally, or use the github UI for blames (here’s a demo).  But be warned that github is sort of slow for large projects, so you will be much better off with a local clone and running git blame (or fugitive.vim, if you’re a vim user.)

If you’re interested in following my steps to do your own conversion, I have good news for you.  I have documented the detailed steps for this conversion from the bare CVS and mercurial repositories to the final git repository.  That directory also includes all of the files and resources that you will need for the conversion.

If you’re interested in more goodies available for this git mirror, check out the latest git-mapfile, the latest git commit and the corresponding hg changeset (and the latest inbound git commit and the corresponding mozilla-inbound hg changeset).  The mozilla-history-tools repository is being constantly updated as my update scripts pick up newer changesets from mozilla-central and mozilla-inbound to always point to the latest commits and git-mapfiles.

Fun Facts

The update scripts are running on my Linux desktop machine at the office.  The mozilla-central update script runs every 30 minutes, and the mozilla-inbound update script runs every 10 minutes.  The box is connected to a UPS to make sure that power interruptions pose a negligible reliability risk.  I do a little monitoring on the update scripts to make sure that they continue to run smoothly: I glance over the emails that cron sends me from the stdout output of the update scripts, and fix up problems as they come up.  As I have fixed more and more problems, the updates have been running fairly smoothly and without any major issues for the past few months.  I did the original work to get the repository in my free time, and I did it because I thought it was useful and I personally wanted better tools for my day-to-day job.  I am glad that others have found it useful.
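For the curious, the cron side of this is nothing fancy; the crontab amounts to two entries along these lines (the script names and paths here are made up, and cron mailing the scripts’ stdout is what provides the monitoring emails mentioned above):

```
# m    h  dom mon dow  command
*/30   *  *   *   *    $HOME/bin/update-mozilla-central.sh
*/10   *  *   *   *    $HOME/bin/update-mozilla-inbound.sh
```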


Firefox 15: updates are now more silent

Firefox 15 was released on August 28th.  Among the many new features in this release is background updates.  This feature allows Firefox to download an update in the background, apply it alongside the existing installation, and keep the updated version around so that it can quickly switch to it the next time the browser starts up.  This effectively eliminates the update progress dialog that appears when you start Firefox after it has downloaded an update:

I previously wrote about this project; you can see that post for more technical details.  This feature landed a while ago on the Nightly channel, and we soon discovered a few issues, which we addressed in time for the feature to get uplifted and enabled on the Aurora channel.  Luckily no new issues were discovered as it rode the train to the Beta channel, and it will get into the hands of all Firefox users on Windows, Mac and Linux as part of the Firefox 15 release.

This was one of the scariest projects that I’ve ever worked on, since messing something up in the updater component could have catastrophic consequences in case it prevents users from being able to update to newer Firefox revisions.  I’m happy that the results of this project will soon get in the hands of millions of Firefox users, and I would like to thank Robert Strong, Brian Bondy, and the wonderful members of our Release Engineering (in particular, Ben Hearsum and Chris AtLee) and QA teams (in particular, Vlad Ghetiu) who helped me a lot along the way.  You guys rock, for being extremely helpful, and for making this large project possible!


Moving the Mozilla code base over to more modern C++

The Mozilla code base is very old.  It dates back to before many of the common features of C++ today existed.  As a result, people need to get used to a lot of arcane patterns and idioms that don’t feel great to those used to more modern code bases.  This has been especially difficult for new contributors, as it increases the barrier to entry by adding to the list of new things they need to learn before they can work effectively on the Mozilla code.

These patterns bothered a lot of people, and Michael Wu fixed the first notable example of such an arcane pattern by converting usage of the type PRBool to bool.  PRBool is a typedef to int which was used all around our code base instead of the C++ bool type.  I then followed suit by converting instances of PR_TRUE and PR_FALSE in our code base to true and false.  Aryeh Gregor then stepped up to convert our null value, nsnull, into the C++11 nullptr.  And I recently switched our usages of the NSPR numeric types (such as PRInt16/PRUint32) to the stdint.h numeric types (int16_t, uint32_t).
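The mechanical core of these conversions can be sketched with a few sed rewrites over a toy file (the real conversions used more careful scripting plus plenty of manual review, so treat this as an illustration only):

```shell
set -e
rm -rf /tmp/prbool-demo && mkdir /tmp/prbool-demo && cd /tmp/prbool-demo

# A toy snippet written in the old NSPR style.
cat > sample.cpp <<'EOF'
PRBool IsReady(PRUint32 aCount) {
  if (aCount == 0)
    return PR_FALSE;
  return PR_TRUE;
}
EOF

# Rewrite the NSPR spellings to their modern C++ equivalents.
sed -i -e 's/\bPRBool\b/bool/g' \
       -e 's/\bPR_TRUE\b/true/g' \
       -e 's/\bPR_FALSE\b/false/g' \
       -e 's/\bPRUint32\b/uint32_t/g' sample.cpp

cat sample.cpp
```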

This has been a long and hard road, and we should continue to do more of this in the future.  While this type of work may not seem that interesting, as it doesn’t improve any user- or web-developer-facing features, I believe it’s very important to the continuous growth of the Mozilla project: it makes the code more modern and pleasant to read, and it makes the lives of people contributing to the Mozilla code base for the first time much easier.

If you have ideas on more of this kind of cleanup, or are interested in helping with this effort, please contact me!


Resizing windows in Ubuntu

If you’re an Ubuntu user, you’ve probably come across problems when resizing windows in recent versions of Ubuntu.  Jeff made me excited today by showing me one way to fix this problem.  I looked around a bit on the web, and I found an even better way.

To modify Ambiance to have a wider margin, open /usr/share/themes/Ambiance/metacity-1/metacity-theme-1.xml and increase the values of the following properties:
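In the Metacity v1 theme format, the relevant entries are the frame-geometry distances; they look roughly like this (the values here are illustrative; increase them to get a wider grab area for resizing):

```xml
<distance name="left_width" value="3"/>
<distance name="right_width" value="3"/>
<distance name="bottom_height" value="3"/>
```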

Hopefully you’ll find this useful!


How I lost access to my Google account today

After I woke up this morning, I saw a weird prompt on my phone asking me to log in.  I tried entering my password a couple of times, but it didn’t work.  I then turned on my laptop and saw that I had been logged out of Gmail.  After I tried logging in, this is what I saw:

Account has been disabled

"Account has been disabled."  I’m sorry, what?!  Yes, indeed, Google has disabled my account for some reason.

I tried looking around the web for solutions, and found out that there are lots of other people who have faced this same problem.  In some cases, the situation had been resolved in a few days, but in other cases people’s accounts were never recovered.  I tried contacting somebody at Google support ("Surely they should have a support department, right?"  Nope, wrong!), but the only thing I could find which did not require being logged in to Google was a simple form which took an alternate email address from me (which Google already had), and didn’t even tell me whether I would be contacted about this.  That was it.

It was around that time that I started to stress out.  I don’t use a lot of the Google services (thankfully), but the two things which I relied on were Gmail and Google Docs.  I have been a Gmail user probably since 2004, and I have tens of thousands of work-related and personal emails stored in my account, some of which are extremely important to me.  I also used Google Docs to store a bunch of very important documents which I won’t be able to recover by other means.  Fortunately I don’t use other services such as Blogger, Picasa, Google Talk or Google+, so other parts of my online life, such as my ability to speak my mind freely on my blog, share photos with friends, talk to them or otherwise interact with them, have not been affected by this.  There are also other relatively minor nuisances happening as a result of this (I won’t be able to use the Market to install or update applications on my Android phone, and my application purchases are in an unknown state at this point), but given the other problems I am dealing with right now, these seem pretty minor.

Now I understand that these Google services are free, but I’ve been paying for the Gmail+Docs shared storage.  Apparently that does not entitle me to customer support, to any rights over the data I have stored on the Google storage for which I am paying, or even to being told why my account was disabled.

Now time to get to the gist of what I want to say in this post.  We’ve all (yours truly included) heard about the importance of owning your digital data, the downsides of vendor lock-in, and how if you’re being provided a free service, you’re the product, not the customer.  But I honestly never understood how deep this problem is, and how severe the consequences can be ("surely this cannot happen to me", right?!).  But starting today, I look at this problem from an entirely new angle.  The issue of user sovereignty for our data was always close to my heart, but this time it’s personal.


Porting an OpenGL application to the web

Emscripten is a tool which compiles C/C++ applications to Javascript, which can then be run inside a web page in a browser.  I have started to work on adding an OpenGL translation layer which is based on WebGL.  The goal of this project is to make it possible to compile OpenGL C/C++ applications into Javascript which uses WebGL to draw the 3D scenes.

My first demo is a port of the es2gears application to the web.  es2gears is an OpenGL ES 2.0 port of the well-known glxgears application.  You can see the web port of es2gears in action if you’re using a WebGL enabled browser (Firefox, Chrome or Opera Next).  For some extra fun, press the arrow keys as the gears are animating!

Screenshot of the es2gears application

This port has been automatically generated from this version of es2gears.  If you want to play with this locally, you can fork the emscripten repository.

A note about the demo: this is not supposed to be a performance benchmark of any kind.  My GLUT implementation uses the requestAnimationFrame API if available, which means that your rendering speed should be capped at about 60FPS.  That is also what you would get if you compiled es2gears directly into a native application.  But this application doesn’t push either the CPU or the GPU to its limits, so it is only useful as a proof of concept, and should not be used to compare the graphics/Javascript performance of browsers!

I’m very excited about this project, and this is only the beginning.  If you’re interested in this work, watch my blog for further posts about future demos!


Updating Firefox in the Background

The dialog below should look familiar.  It displays while Firefox completes the update process after a new version has been downloaded and the browser is restarted.

Firefox Update Dialog

In order to update itself, Firefox first downloads an update in the background.  When the update is downloaded, Firefox stages it in a directory, ready to be applied.  The next time Firefox is about to start up, it checks the staging directory, and if an update ready to be applied is found, Firefox launches the updater program and applies the update on top of the existing installation (showing that progress bar as it does its job).  When the update process is finished, the updater program restarts Firefox.  All of this happens while you’re waiting for your browser to start up in order to do what you wanted to do.  This is clearly less than ideal.

For the past few weeks, I have been working on a project to improve this process.  The goal of my project is to minimize the amount of time it takes for Firefox to launch after downloading an update.  The technical details of how I’m fixing this problem can be found in this document.  Here’s the short version of how the fix works.  When Firefox finishes downloading an update, it launches the updater application in the background, without displaying any UI, and applies the update in a new directory that is completely separate from the existing installation directory.  Instead of staging just the update, an entire updated copy of Firefox is staged.  The next time Firefox starts up, the existing installation is swapped with the updated installation, which is ready to be used.  In this scenario, you likely won’t even notice that Firefox has applied an update, as no UI is shown.

Now, the reason this approach fixes the problem is that swapping the directories, unlike the actual process of applying the update, is really fast.  We are effectively moving the cost of applying the update to right after the update has been downloaded, while the browser is running, leaving only the really fast swap operation to be performed the next time the browser starts up.

I have some experimental builds with this feature ready on a temporary channel called Ash.  The implementation is now at a stage where it can benefit from testing by the community.  You can download the latest builds here.  I will trigger a few nightly builds on this branch every day, so you will get updates if you’re running an Ash build.

In order to help with testing this new update process, all you need to do is download the latest build from Ash, wait a few hours until a new nightly build becomes available, and then update to that build.  Updating can be triggered manually by opening the About dialog, or by the background update checker if you leave the build running for a few hours.  If everything works correctly, when you restart Firefox you should get the new build without seeing any progress bar as Firefox starts up.  To verify that you have indeed been updated to a new build, you can go to about:buildconfig, copy its contents, and then compare them with the contents of about:buildconfig after Firefox restarts following an update.

It would be extremely useful if you could test this with different types of security and anti-virus software running.  If you observe any problems or warnings, or if you see that the update did not change the contents of about:buildconfig, please let me know so that I can try to fix those problems.

For people who are curious to see the code, I’m doing my development on this branch, and I’m regularly posting patches on bug 307181.

Please note that this is still in the testing stage, and at this point, we’re not quite sure which version of Firefox this will land in (we’re working to land it as soon as is safely possible). No matter which version of Firefox includes this feature for the first time, we believe that this will be a very positive change in making the Firefox update experience more streamlined for all of our users.


Why you should switch to clang today, and how

Clang is a new C/C++/Objective-C/Objective-C++ compiler being developed on top of LLVM.  Clang is open source, and its development is sponsored by Apple.  I’m writing this post to try to convince you that you should switch to using it by default for your local development, at least if you’re targeting Mac or Linux.

Clang tries to act as a drop-in replacement for gcc by imitating gcc’s command line argument syntax and semantics, which means that in most cases you can switch from gcc to clang by just changing the name of the compiler you’re using.  So switching to clang is going to be really easy, and it provides at least two useful features which make it genuinely better than gcc for local development:

  • Compilation speed.  Clang is usually a lot faster at compiling than gcc.  It’s been quite a while since I did measurements, but I’ve seen compile times up to twice as fast with clang compared to gcc.  Yes.  You read that right.  Twice!
  • Better compiler diagnostics.  Clang usually provides much better diagnostics when your code fails to compile, which means that you spend less time figuring out how to fix your code.  It even goes further and suggests the most likely fixes.  I’ll give you two examples!

Consider the following program:

void foobar();

int main() {
  foobaz();
}

Here is the output of clang on this program:

test.cpp:4:3: error: use of undeclared identifier ‘foobaz’; did you mean ‘foobar’?
test.cpp:1:6: note: ‘foobar’ declared here
void foobar();
1 error generated.

Here’s another program, followed by clang’s output:

#define MIN(a,b) (((a) < (b)) ? (a) : (b))
struct X {}

int main() {
  int x = MIN(2,X());
}

test.cpp:2:12: error: expected ‘;’ after struct
struct X {}
test.cpp:5:11: error: invalid operands to binary expression (‘int’ and ‘X’)
int x = MIN(2,X());
test.cpp:1:24: note: instantiated from:
#define MIN(a,b) (((a) < (b)) ? (a) : (b))
                   ~~~ ^ ~~~
2 errors generated.

Now if that has not made you drool yet, you can check out this page for more reasons why clang provides better diagnostics than gcc does.

For the impatient, here is how you would build and install clang locally on Mac and Linux.  You can check out this page for more comprehensive documentation.  Note that the Windows port of clang is not ready for everyday use yet, so I won’t recommend switching to clang if you’re on Windows.

mkdir /path/to/clang-build
cd /path/to/clang-build
svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm
cd llvm/tools
svn co http://llvm.org/svn/llvm-project/cfe/trunk clang
cd ../..
mkdir build
cd build
../llvm/configure --enable-optimized --disable-assertions
make && sudo make install

At this point, clang should be installed to /usr/local. In order to use it, you should add the following two lines to your mozconfig file:

export CC=clang
export CXX=clang++

Note that I’m assuming that /usr/local/bin is in your $PATH.

I’ve been using clang locally with very few problems for the past few months.  There has been a lot of effort to make Firefox build successfully with clang (mostly due to the heroic work done by Rafael), and he is now working hard to get us clang builders so that we will know when somebody lands code which breaks clang builds.  But you can switch to clang locally today and start benefiting from it right now.

Switching to clang will soon enable you to do one more cool thing.  But I won’t talk about that today; that’s the topic of another blog post!


Upcoming changes to absolute positioning in tables and table margin collapsing in Firefox 10

Last week I landed a number of patches I’ve been working on which fix two very old (5-digit) bugs in Gecko (bug 10209 and bug 87277) that affect the rendering of web content.  This post summarizes the resulting changes to the behavior of Firefox.

The first behavior change is about absolute positioning of elements inside positioned tables.  When you specify the CSS position: absolute style on an element in a web page, it is taken out of the flow of the web page, and its position is calculated relative to the nearest positioned ancestor in the DOM (by positioned, we mean an element which has a non-static position, i.e., one of fixed, relative or absolute for its position computed style).  See this test case for a simple example.

For a long time, Gecko only looked for inline or block positioned elements in the parent chain.  So, for example, if you had a positioned table element inside a positioned block element, any position: absolute element inside the table used to be positioned relative to the outer block element, as opposed to the table element (which is the nearest positioned ancestor).  This test case shows the bug if you load it in Firefox 7.  In the correct rendering, the div with the red border should be placed 10 pixels below the element with the green border, but in Firefox 7 it is positioned 10 pixels below the element with the red border.

The other behavior change is a fix to margin collapsing on table elements.  Firefox used to have a bug which caused margins on table elements not to be collapsed with those of adjacent elements.  Take a look at this test case for example.  Here we have an outer div with a height of 180 pixels, in which there are 4 div elements, each with a height of 20 pixels and with 20-pixel margins on each side.  The correct rendering for this test case is for the 4 inner divs to be laid out evenly spaced vertically in the outer div.  This happens because the bottom margin of the first inner div is collapsed with the top margin of the second div, which causes the content of the second inner div to be laid out 20 pixels below the content of the first inner div.

Now, if you make the inner divs tables instead (as in this test case), you can see the bug in Firefox 7.  What’s happening is that the vertical margins between the tables are not collapsed, which effectively means that the content of each table is laid out 40 pixels below the previous one, making the 4 tables overflow the height of their container.  If you try this test case in Firefox trunk, you will see that the rendering is identical to the rendering of the test case using inner divs instead of inner tables.

Note that this fix is not specific to the case of adjacent tables.  This test case interleaves divs and tables.  The rendering should be identical to the previous two test cases in Firefox trunk now.

It should be obvious that since these two changes affect the rendering of web pages, they may break existing web sites.  Indeed, today we had our first bug report about this behavior change.  The good news is that these two changes make us more compliant with the CSS specification, and all other web browser engines already implement these cases correctly, so web sites affected by these two changes have been relying on a bug in Gecko, and have probably been broken in other web browsers for a long time.  These fixes bring Gecko on par with other browser engines.

You can test this fix in Firefox Nightly right now.  If you see this change affecting a website you own, please let me know if you need help in fixing it.  If you see this change affecting websites you do not own, please let me know and I’ll try to contact the website and help them fix the problem.  If you see a behavior change which you think is not intentional and is a regression from these changes, please file a bug and CC me on it.


Submitting my first patch to Chromium

A couple of weeks ago, I submitted my first patch to the Chromium project.  I was always curious to know what their patch submission process looks like to a newcomer, mainly in order to see if we can apply some of their ideas to Mozilla.  Here’s the story of what happened.

It all started when I tried to fix bug 98160 (which also happens to be the first five digit bug that I’ve fixed, with the second one on the horizon now — stay tuned!).  When fixing that bug, I got curious to see how Chromium is handling that issue, so I decided to go and read their code.  This turned out to be a good decision, because I found out that they had gone through some iterations in order to finally simulate what Windows does natively, so I decided that I should borrow some of their code and ideas.

While reviewing my patch, roc spotted a spelling mistake in a comment in the code that I had borrowed verbatim from Chromium.  So I decided that it would be a good opportunity for me to submit a patch to the Chromium project to fix this spelling mistake.

I went to the Chromium website, and quickly found this link. [1]  Then I modified my local Chromium git clone, and decided that I should ignore everything under the "Get your code ready" section, because, well, I was submitting a spelling fix to a comment!  How hard could that be?  It turns out that I had made two mistakes.

Then, I filed a bug in Chromium about the problem (I wasn’t really sure if that was needed or not, but what I saw later on when using git-cl led me to believe that it was indeed required.)  I then used git-cl to upload my local git branch as a patch for code review.

This was perhaps the best part of the entire process. [2]  I just typed git-cl upload spelling-fix (spelling-fix being my local git branch name) and a console-based wizard started.  The first thing it prompted me about was that I needed to add myself to the AUTHORS file because this was my first patch.  "What?!", you may ask.  Compulsory credits?!  Well, it turns out that was my first mistake: not reading the "Get your code ready" section.  I asked on their IRC channel, and shortly after someone replied: "yes, you need to do that".  "But this is only a spelling fix to a comment, surely I don’t deserve to be credited for that?" I asked.  "No, you should add yourself to the AUTHORS file," they replied.  I gave up, added myself to the AUTHORS file, committed, and ran git-cl again.  This time, git-cl uploaded my patch, causing this code review request to be submitted.  But this time I noticed that git-cl asked me to enter a bug number associated with my patch, which makes me think that filing a bug was indeed necessary.

I waited for a short while, and the same person (sorry, I forget their handle to credit them with helping me) pinged me on IRC saying that something was wrong with my patch.  This kind of scared me, but after a short while, he said that somebody had landed a patch which caused mine to no longer apply cleanly.  After a while, we figured out that it was a conflicting change to the AUTHORS file(!!!), so I waited for their git mirror to catch up with the change, pulled and rebased my branch, and then uploaded my patch using git-cl again.  After a short while, I got an email from the Chromium issue management system containing a review message saying "lgtm" (looks good to me).
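The pull-and-rebase dance in that paragraph is ordinary git, independent of git-cl.  Here is a minimal, self-contained sketch of it; the repository, branch, file contents, and author names are all made up for illustration, and git-cl itself is left out:

```shell
# Reproduce locally: a patch branch falls behind after an upstream landing,
# so we rebase it onto the new tip before re-uploading.
# (All names here are hypothetical; this is plain git, not Chromium tooling.)
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name "Demo User"
trunk=$(git symbolic-ref --short HEAD)   # default branch name varies by git version

echo "Existing Author" > AUTHORS
git add AUTHORS
git commit -qm "Initial AUTHORS file"

# The patch branch: the fix plus the mandatory AUTHORS entry.
git checkout -qb spelling-fix
echo "New Contributor" >> AUTHORS
git commit -qam "Fix a spelling mistake; add myself to AUTHORS"

# Meanwhile, someone lands an unrelated change upstream.
git checkout -q "$trunk"
echo "notes" > README
git add README
git commit -qm "Unrelated upstream change"

# Catch up before re-uploading: rebase the patch branch onto the new tip.
git checkout -q spelling-fix
git rebase -q "$trunk"
git log --oneline
```

After the rebase, the patch branch contains both the upstream change and the fix, and re-running the upload step picks up the rebased commit.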

Some time later in the process (I forget when exactly), the same person pinged me on IRC telling me that I needed to sign their Individual Contributor License Agreement.  At Mozilla, we don’t require that upon submitting a first patch, but as I understand it (IANAL), Google requires individual contributors to grant copyright over their work to Google.  This made me feel a bit weird, but I didn’t care too much about such a simple fix, so I went ahead and submitted the form online.  If I ever decide to contribute anything more significant to the Chromium project, I may revisit this decision.  I am still the copyright holder of all of the code I have submitted to the Mozilla project as a volunteer, and I think that is a right that organizations behind free software projects should not take away.

Then my patch was automatically submitted to Chromium’s try server.  I found out about this when I got an email like the following:

Chromium try server success message

I waited a bit more, and I got 3 more emails like that, and one indicating a failure:

Chromium try server failure message

There was also an orange run further down the email that did not fit into this screenshot.  This suggested to me that Chromium is also suffering from tests failing intermittently.  In fact, after searching a bit I found out that they have a dashboard for their flaky tests (which could be the subject of another post!).  As indicated in the email, I replied to it explaining that my patch was a spelling fix in a comment, and there is no way for it to have caused a test failure.  A while after that I got a reply to that email from a human agreeing with my assessment of the situation.  Then, I got an email from a bot saying that my code was landed in their repository!

My impression of the entire process was positive.  Chromium definitely has a streamlined patch submission process for new contributors.  It took a bit more than 3 hours from when I filed the bug for my patch to be committed to their repository, which is impressive (that might have something to do with my patch being trivial, and me hanging out on their IRC channel, but still!).  I liked their git-cl tool very much, and I would definitely like for Mozilla to also have a command line tool which would file a bug on behalf of someone and attach a patch to it.  The only weird thing that I felt was that the process was too bot-centric.  If I had not been talking to people on their IRC channel, the only human interaction that I would have had with the Chromium folks would have been the review and the person replying to my email about the try server failure.  This might be a good or a bad thing, depending on what type of person you are, but I definitely enjoy getting the sense that more humans are involved when I submit a patch to an open-source project!

My next large open source project to target is WebKit.  I’ll post about my experience when I actually have time to write the WebKit patch that I have in mind.  :-)

  • [1]  We don’t do as good a job as Chromium does here.  If you go to the Mozilla website, you should click on the Getting Involved link, then wonder for a bit about what you should do after that.  Hopefully you’ll figure out to click the Areas of Interest link, which is a small link at the top right, then scroll down to Coding, then read the text for a bit and wonder which link you should click, finally clicking on Developers can help, and then find yourself in front of a huge page containing all sorts of links.  Hopefully after some pondering, you’ll click on Getting your patch into the tree, which takes you to a page titled "How to submit a patch" (finally!).  Fortunately, the last page is a wiki, so I went ahead and changed the title of the aforementioned link to "How to submit a patch", which I think is a much better title.
  • [2] Figuring out how to run git-cl was a huge pain.  I had the Chromium depot tools installed in a weird way that caused them not to be in my $PATH, and that caused me to wonder for a few minutes how to get the "cl extension" to git, and for some more minutes how to figure out that I needed to find the git-cl script somewhere in my depot tools installation directory.  I wish this tool lived in their tree somewhere.

Posted in Blog Tagged with: