Call a spade a spade, and a Nightly a Snapshot

This is a blog post to sum up some thoughts I shared on the Nebula-dev mailing list with Wim Jongman, and later on Twitter with Zoltán Ujhelyi and Dave Carver, about naming build types at Eclipse.org and scheduling them.
It is a topic open to debate. My goal is to find practices and names that are relevant and useful for the community (contributors and, mainly, consumers), and also to give some food for thought about how to deal with builds.

Background

Historically, Eclipse has had 4 to 5 classical qualifiers for binary artifacts:

  • Release
  • Maintenance
  • Stable
  • Integration
  • Nightly

This wording is specific to Eclipse; only Eclipse people understand what it means. Even worse, some of these qualifiers are not used accurately. Although this is more or less the official wording, I am not sure most projects use it.

This guy has driven a revolution in the way we build and deliver software

Now Eclipse.org provides continuous integration to automate and manage build executions, and most builds happen whenever a change lands in your VCS. Continuous integration has deeply changed the way binaries are produced and made available to consumers: it is now much easier to get builds, and lots of projects have less than 10 minutes between a commit and a build ready to be released.

That was the starting point of my thoughts, with the Nebula build job: why call a job "nightly" when it actually runs a build any time a commit happens?

Requalifying binaries to make consumption clearer

As a producer of builds, here is my opinion on these qualifiers:

  • Release: The heartbeat of the product for consumers
  • Maintenance: is a Release, but with the third digit of the version different from 0
  • Stable: is nothing but an Integration build that was put on a temporary update site for the release train. To be honest, I don’t use it; I directly point the release train builder to the latest good continuous integration builds
  • Integration: The heartbeat of the product for developers, built on each commit
  • Nightly: What is in it? What does nightly mean at the same time for a developer in Ottawa and a developer in Beijing? Who cares that it is built nightly and not at 2pm CEST? For most projects, the nightly-built artifacts are not at all different from the ones that would have been built from the last commit. What is the added value of artifacts built during a nightly scheduled build over the latest artifact built from the last commit? Why build something every night if there has been no commit for 3 months?

So I am in favor of removing Maintenance, Stable and Nightly. That gives

  • Release
  • Integration

Wow, 2 kinds of builds, that reminds me of a tool… Oh yes! Maven! Maven has 2 types of binaries: releases and SNAPSHOTs. That’s great, we finally arrive at the same conclusion as Maven, except that what Maven calls “snapshot” is called “integration” in the Eclipse terminology.
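For readers who are not familiar with Maven, here is a minimal sketch of how that convention looks in a pom.xml; the version numbers are purely illustrative:

    <!-- Work in progress: what Maven calls a SNAPSHOT;
         it can be rebuilt and replaced at any time -->
    <version>1.2.0-SNAPSHOT</version>

    <!-- Release: built once, then immutable -->
    <version>1.2.0</version>

The suffix alone tells any Maven user whether the artifact is a moving target or a fixed point.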

But now, let’s consider the pure wording: why have 2 names for the same thing? What should we call a binary that is work in progress: an “Integration” or a “Snapshot”?

Let’s be pragmatic: This report tells us there are about 9,000,000 Java developers in the world. This one tells us 53% of this population use Maven. Let’s say that about 60% of Maven users understand the meaning of SNAPSHOT. That means 9,000,000 × 53% × 60% = 2,862,000 people know what SNAPSHOT means. Wayne confirmed to me that the size of the Eclipse community is 132,284 people, who might know the meaning of “integration” in an Eclipse update-site name. That’s the number of people who have an account on eclipse.org sites – wiki, forum, bugzilla, marketplace. Even if we assume that 100% of them understand the differences between the qualifiers, and I am pretty sure that fewer than the 600 committers actually do, that makes “Snapshot” about 21 times more popular and better understood than “Integration”.

So, Eclipse projects and update sites would be easier to understand and consume for the whole Java community if we adopted the Maven wording and used it on websites and in update-site names.
Following the Maven naming and the release/snapshot dogma would not only make consumption easier, it would also avoid duplication of built artifacts and make things go more “continuously”. Your project is on rails: it moves ahead with Snapshots, and sometimes stops at a Release. That’s also a step towards the present (see how GitHub works) of software delivery: continuous improvement, continuous delivery.

Requalifying build job to make production clearer

So now let’s talk about build management, rather than delivery.

If you need several jobs to keep both quality and fast-enough feedback, then set up several jobs!

Dave Carver reminded me about some basics of Continuous Integration: keep feedback loops short, one build for each thing. That’s true. If you have a “short” build for acceptance and a “long” build for QA, you need separate jobs. Developers need acceptance feedback and they also need QA feedback, but they certainly cannot wait for a long build to get short-term feedback. Otherwise, they’ll drink too much coffee while waiting, and that is bad for their health.


But do not create several builds until you have a real need (metrics can also help you see whether you do). If you have a 40-minute full build that rarely fails, with slow commit activity, and nobody synchronously waiting for this build to move ahead, then multiplying builds and separating reports could be expensive for a low return on investment. That’s the case for GMF Tooling: we have a 37-minute build with compile + javadoc + tests + coverage + signing, and we are currently quite happy with it; no need to spend more effort on it now. Let’s see how we feel once there is a Sonar instance at Eclipse.org and we have enabled static analysis… Maybe then it will be time to split the job.

Avoid scheduling build jobs, it makes you less agile

Before scheduling a job that could happen on each commit, just think about how long the feedback loop between a commit and the feedback you’ll get will be: it is the time that elapses between the commit and the clock event starting your job. It can be sooooo long! Why not start it on commit and get results as soon as possible? Also, why schedule and run builds when nothing may have changed between two schedule triggers?
I can only see one use case where scheduling job executions is relevant: when you have lots of build stuff in the pipe of your build server and limited resources. Then, in that specific but common case, you want to set up priorities: you don’t want a long-running QA job to slow down your dev team, who are waiting for feedback from a faster acceptance build. For this reason, I would use scheduling, but I would do so because I have no better idea of how to ensure the team will get the acceptance feedback when necessary; it is a matter of priority. Maybe investing in more hardware would be useful. Then you could stop scheduling, and get all your builds giving feedback as soon as they can. Nobody would have to wait for a schedule event to move the project ahead.
As I said to Zoltán on Twitter: “The best is to have all build reports as soon as possible, as often as possible” (and of course only when necessary; don’t build something that has already been built!). Scheduling often goes against those objectives, but it sometimes helps to avoid bottlenecks in the build queue, and thus saves projects time.

Name it to know what it provides!

Remember Into the Wild or Doctor Zhivago... "By its right name"...

Ok, at Eclipse, there are “Nightly” jobs. I don’t like this word. Once again, “Nightly” is meaningless in a worldwide community. And the most important questions are “What does this build provide?” and “Why is it nightly? Can’t I get one on each commit?”.
If this build is run on each commit, then don’t call it “Nightly”, because it gives the consumer false information. If you think about having both “acceptance” and “QA” jobs, then put that in their names rather than scheduling information; that’s far more relevant.

Conclusion

Continuous integration has changed the way we produce and deliver software; we must benefit from it and adopt the good practices that come with it. Continuous integration is the first step towards what seems to be the present or near future of software delivery: continuous improvement and continuous delivery. We must not miss that step if we want to stay efficient.


A TreeMapper widget in Nebula

TLWR (Too Long, Won’t Read):

 

Very often, one has to define links or mappings between objects. Here is the way I learnt to represent the concept of matching items when I was 4.

A matching game

Now I’m 26 and I use the concept of mapping every day at work. It can represent a reference, a transformation, an association… But it’s now more complicated, because I now use structured data!

With the BPEL “Assign” element in mind, I wanted to find a widget that would allow me to map structured data, represented as trees, using the simple “draw a line” method that I have been using for years. Except that “draw a line” becomes “drag and drop” on a computer. After some research, I could not find an open-source widget for that, so I started to develop a new one. Here is what it looks like:

The TreeMapper in action

Although it is primarily intended to be used in the Eclipse BPEL Designer, I made it part of Nebula because I hope it is helpful for lots of other projects in the Eclipse community. I can remember some projects I work(ed) on that had nice use-cases for this widget (Eclipse JWT and Scarbo, Bonita Studio, Petals Studio…), and I am now thinking about how to simplify the GMFMap editor with it.

If you are interested in it and if you plan to attend EclipseCon 2012, then vote for this submission to learn more: http://www.eclipsecon.org/2012/sessions/%C2%A1new-nebula-treemapper-widget

PS: If you like it, then thank the French National Research Agency for funding it


GMF Tooling and its 2.4.0 “for Indigo” release are in Da Place

For those who did not follow GMF Tooling development over the last year, let’s just say that you missed a complex story. Since most of the contributors had changed, GMF Tooling had trouble setting up new, efficient leadership, had trouble providing builds, did not manage to get into the Indigo release train, and could not provide a release compatible with Indigo… That was a sad part of GMF Tooling history! But this is now over. Here are the recent accomplishments that bring GMF Tooling back to active life:

GMF Tooling 2.4.0 “for Indigo” released

A lot of people were waiting for it, and it finally happened: GMF Tooling (finally) has a release that works with the Eclipse Indigo release! It mainly consists of bug fixes and compatibility improvements. Here is the p2 repository: http://download.eclipse.org/modeling/gmp/gmf-tooling/updates/releases/

A new lead: Michael “Borlander” Golubev

After lots of mails, the GMF team, helped by the Modeling PMC, was able to nominate a new lead to oversee the GMF Tooling contributor team and development. He is Michael Golubev, often known as “borlander” on bugs and forums, who works for Montages as a full-time developer on GMF Tooling. He has also been the lead of UML2 Tools.

Simple build process thanks to Tycho, and a hosted build on hudson.eclipse.org

GMF Tooling now has a Tycho build, which is far easier to maintain and run than the legacy one. Contributors can now run tests very easily with a “mvn clean install” to ensure their work did not break anything. That makes contributing much easier. Moreover, the build is hosted on hudson.eclipse.org, so it is easy and transparent to get an idea of how healthy the code is. Also, moving to continuous integration on Eclipse servers makes it possible to produce builds that are equivalent to the ones that will be released (including signing and all the necessary Eclipse stuff). So building a release is no more difficult than building a snapshot.

Get GMF Tooling back into the Modeling discovery service

The Modeling Discovery wizard is the wizard that appears when downloading the Eclipse Modeling package, to suggest some projects to install and use. GMF Tooling just got back into it as I am writing this post!

Guarantee that GMF Tooling will make it into the Juno release train

We also made the effort to ensure that the future of GMF Tooling will be less chaotic than it was for Indigo. We have already done most of the necessary work to get GMF Tooling into the Juno release train. So, no stress this year! More details here.

Improved documentation

See this effort in my previous post. This is a never-ending work in progress, so feel free to contribute directly by making the wiki easier to navigate.

So… what’s next?

A lot of things, plus whatever the community will think about contributing. The project plan for GMF Tooling 3.0 (yes, 3.0!) is not yet finished, but here are some key objectives:

  • Easier and more intuitive Tooling – with a high-level graphical editor to define your own graphical editor (a “meta-“editor)
  • Improve integration/collaboration with other Modeling projects (EEF, XText…)
  • Move to Git
  • Enroll more contributors in GMF Tooling development
  • Simplify generator code
  • Extensibility of generator and tooling to make it easy to add support for new things in GMF Tooling from 3rd-party bundles.

I think GMF Tooling has just achieved a major step, and I bet this is the beginning of a new, leaner era for Graphical Modeling!


Back on SoftShake

SoftShake 2011 was already several weeks ago! It was a very nice conference, quite well organized, with different tracks that make it easy to always find something interesting to learn.



SWTBot tip of the day: Make your tests’ dependencies on UI contributors explicit!

As I am working on building SWTBot with Tycho, I found a mistake that is quite common with SWTBot and that makes tests fail with Tycho whereas they work with some more “opaque” builders.

When you write a UI test, your test depends on the UI components it uses. This dependency is specific to your test bundle, and therefore must be explicitly defined in its MANIFEST.MF. For example, if your test clicks on the “New > Java Project” menu, it highly depends on org.eclipse.jdt.ui, which provides this contribution. So do not forget to add it to your dependencies!
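As an illustration, here is a minimal sketch of the relevant part of such a test bundle’s MANIFEST.MF; the bundle symbolic name and the SWTBot dependency shown here are only plausible examples, not taken from a real project:

    Bundle-SymbolicName: org.example.myproject.swtbot.tests
    Require-Bundle: org.eclipse.swtbot.eclipse.finder,
     org.eclipse.ui,
     org.eclipse.jdt.ui

With org.eclipse.jdt.ui declared explicitly, the “New > Java Project” menu is available in any platform where the test bundle and its dependencies are installed.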

A bit easier than "Where is Waldo"

It can work in some cases, when you are sure your test platform already contains the contributor of the UI elements you manipulate (here org.eclipse.jdt.ui). Then the menu is already there – as a third-party contribution – although you did not add a dependency on it in your test. But that’s more or less a lucky case, or a case that requires rigorous management of your test platform.

With Tycho, your test platform is, by default, made of your test bundle and all its dependencies (computed from the MANIFEST.MF). So if you don’t make your dependencies on UI contributions explicit, your test will probably run in a target platform which does not include the UI elements you interact with, and will fail. When you have this dependency on the UI contributor (such as org.eclipse.jdt.ui) in your test MANIFEST.MF, you are sure the menu will be available whenever you execute your test. Moreover, you are sure that any installation of your test bundle with p2 will contain the necessary stuff to get it working.

To sum it up: if your test depends on UI elements, then it depends on the plugins that contribute these UI elements. So declare it in its MANIFEST.MF. That’s all!


Speaking at SoftShake 2011

I have recently been really pleased to learn that I was accepted as a speaker for the SoftShake conference, which takes place in Geneva on the 3rd and 4th of October. It will be a great opportunity to meet new people, to learn about new trendy technologies and methodologies, but also to teach the audience the basic issues of Modeling, and how the Eclipse Modeling project is there to help with them. The abstract of the 50-minute session can be found here: http://soft-shake.ch/en/conference/sessions.html?key=modelingwitheclipse . If you plan to attend this presentation and have any topic you’d like to hear about, feel free to ask; I’ll try to cover it during the presentation.

The "SoftShake speaker badge"

I am very excited by this presentation since I think Modeling is something very productive and powerful, and I know how helpful the tools provided by Eclipse are. It is the first time I have the opportunity to democratize Modeling, Model-Driven stuff and the Eclipse Modeling Project at a conference! It requires a lot of work to prepare a good presentation, but that’s work I love to do.

On a side note, my PetalsLink colleague Jean-Christophe Reigner was also accepted as a speaker. He will democratize the usage of ESBs: what use-cases they resolve, what their role is in your information system, and so on. That should also be quite interesting, since ESBs are very powerful and scalable middleware and most people don’t see their actual value yet. So if you are interested in SOA, information systems, middleware, ESBs and so on, you should definitely come to see his talk; you’ll learn interesting things.


GMF Tooling is back in the train

Unfortunately, GMF Tooling did not have enough resources to get into the latest Indigo release train.

GMF-Tooling and Indigo...

However, this was just an exception: GMF Tooling is already back on the Juno train! In order to be more responsive to the release train requirements, GMF Tooling moved its build to Tycho, making the build system quite easy to maintain, and the release train rhythm much easier to follow.

Thanks to Tycho, GMF-Tooling is already in the train

And you can see it here:

GMF Tooling in Juno

Then you won’t have to search a lot in order to get GMF Tooling in Juno.


Refreshing GMF documentation

Da Graphical Modeling Framework (GMF) has always been an amazing, very productive and powerful project to generate diagrams for models based on EMF. However, I recently read several things on forums and chatted with people, which made me think GMF is difficult to use for newcomers. GMF is not that complicated to understand and use, but the feedback I got shows that it is not easy to get into, and that can be a blocking issue for some potential users. The question is: “what can we do to make it easier for people to consume GMF?”.

The first answer I have is to refactor the documentation to show more “Getting started” tutorials, so that people spend less time finding a way to get started.

What you see on the main GMF wiki page

Also, I thought the tutorial was too monolithic. There are a lot of things you can do with GMF, but not all of them will be useful for your use-case. Users need to clearly understand what use case a piece of documentation actually resolves, in order either to skip it or to spend more time on it. This can be achieved by reorganizing the titling of the documentation. Then it becomes easier to find out what the different steps are when creating a diagram, which ones you can skip, and which ones interest you.

Structured organization and titling

Finally, by digging into the current GMF documentation, I discovered that GMF has lots and lots of resources to help people leverage it. There are several tutorials, including one with Flash videos showing how to generate a diagram. This is a very high-value resource, but it was unfortunately not very easy to find. It deserved to be highlighted! A sad thing is that (AFAIK) there is currently no way to embed these movies in the wiki. I hope Bug 352735 gets fixed soon so that people can really benefit from these tutorials without effort.

If you know anything that could improve the refreshed documentation, please do it. Remember your first experience with GMF, or the first time you met a classical use-case: how did you search for an answer? How could the answer have been easier to find? How could it have been easier to understand? What in the documentation makes it less efficient? If you have ideas, feel free to edit the documentation accordingly.

I learnt a few lessons from this work:

  • Very new users are really the people to target for tutorials.
  • For tutorials, a video is more attractive and efficient than 5 pages of documentation.
  • Do not try to be exhaustive in tutorials; prefer being modular or pluggable, so that the tutorial remains easy to follow and people can extend it by adding a sub-sub-title. It will make the tutorial easier to maintain, without making it harder to read (people are able to skip your “tutorial modules”).
  • Make your tutorial incremental. Resolve use-cases one after the other. Be very explicit about each use-case. People need to understand where you are going and why before reading the “how”.
  • Sometimes there is already a lot of documentation available, and there is no need for more. Instead, it can be useful to spend time organizing the documentation and making it more visible. Documentation that nobody finds or reads is very sad; it is waste.

These are my very first steps with documentation, and my first thoughts after working a little bit on GMF documentation. There are probably some things you’d like to tell me on this topic, based on your experience. I’d be glad to learn from you, so please do 😉


Back on Grenoble Demo Camp

The first Eclipse DemoCamp in Grenoble took place on Tuesday. With 25 attendees, it was a very good opportunity to meet people who are well known in the Eclipse community, but also some new people who are starting to use Eclipse and develop plugins to resolve very interesting use-cases.

Here is a small summary of the event (Thanks to Adrian for the pictures).

Agenda

First, Adrian welcomed us at the Xerox Research Europe castle. A very nice place!

Then the event was made of 2 parts.

Eclipse projects

The first one consisted of presenting new stuff at Eclipse: Indigo, of course, but also news about some other projects.

I started by giving the audience some insight into “What’s new in Indigo?”, and then presented a demo of WindowBuilder. I hope I convinced almost everybody to use this project, which I personally love!

Me presenting the Runtime Packaging Project

Then my dear former colleague Aurélien presented and demonstrated to the audience how they could leverage the Memory Analyzer tools to resolve memory issues in their applications. Slides are here.

"You need a snapshot of your memory"

Next, Vincent, my new colleague since I joined PetalsLink, presented 2 projects of the SOA landscape at Eclipse: the SCA editor, and the BPEL designer, which is coming back to life at Eclipse and is going to join the SOA top-level project very soon.

Vincent explaining what is SCA and demo'ing the editor

Aurélien and I closed the first part of the event by presenting an overview of the Modeling stuff at Eclipse. I liked presenting it so much that I submitted a presentation about Modeling at Eclipse to Devoxx; if it gets accepted, it will include more demonstrations and will be improved thanks to the feedback people gave us during the DemoCamp!

It was then time for a break! Adrian came with beverages and very good food, such as macaroons. I love macaroons. Unfortunately, there is no picture of this break, but people really looked happy chatting with one another, drinking and eating.

Case studies

The second part of the DemoCamp was dedicated to case studies: showing what people do with Eclipse and how they achieve their goals.

The first one to present a case study was Aurélien (again! 😉), who highlighted the main Modeling features of the “Best Eclipse Modeling Application” Bonita Open Solution, and explained the tricks used by Bonita to customize GMF editors. See slides.

Aurélien and Bonita Open Solution

Next, Marc Dutoo (my first Eclipse mentor, who made me a committer on JWT while I was a trainee) from Open Wide presented the EasySOA research project, which leverages several Eclipse SOA technologies to make the consumption of services easier.

Marc explaining the goals of EasySOA

The following presentation was about Xeproc, a model used at Xerox to process documents. Thierry Jacquin explained the use-case of Xeproc, which they use to discover how documents are structured and to extract some meta-information from them, and Adrian explained how he plans to make it interact with several SOA projects at Eclipse, using Mangrove.

Adrian explaining how Mangrove can be used as a pivot for all SOA technologies

The last two presentations were proposed by guys from IsandlaTech, who came to present solutions from their daily work with Eclipse. Olivier Gattaz started by explaining that they use the Eclipse spellchecker a lot, mainly to write documentation, and by telling us the limits of the current spellchecker in Eclipse for his use-cases. Then he introduced the Hunspell4Eclipse project, available on the Eclipse Marketplace, which provides a spell-checking implementation in Eclipse that is the same as in Firefox or LibreOffice. It was very interesting, and while doing this work, Olivier discovered some issues in the JDT editor that I hope will be fixed one day! (slides)

Olivier convincing us to use Hunspell4Eclipse

And the last speaker was Thomas Calmant, who demonstrated the ReST Editor, a very smart (maybe the smartest) editor for reStructuredText, a documentation language widely used in the Python community. This editor is full of very nice features that make editing reStructuredText much more comfortable than with a basic text editor. Click here for the slides!

Lessons I learnt

  • Eclipse DemoCamps are cool events to meet people
  • Eclipse DemoCamps are cool events to discover new Eclipse use cases
  • Eclipse DemoCamps are cool events to speak at
  • Eclipse DemoCamps are really useful for the life of the community
  • Modeling is not an easy thing to present, but it is quite interesting to do. Everyone likes at least one thing in the Modeling landscape at Eclipse.
  • Xerox offices look like a holiday center 😉
  • It is not always easy for people who develop plugins to find out how to get influence on big Eclipse projects. People who have been involved for a while know that everything starts with participating in forums and opening bugs, but it is not so obvious for newcomers. It is our role as members of the community to guide them, and to recruit them into the community. DemoCamps are perfect events for that.
  • According to the audience of this DemoCamp, a lot of people really like GMF, but find it very difficult to use. The documentation is very weak compared to the power of the project. That’s why I spent some time refactoring the Tutorial. The objective is to make GMF an easy-to-use project. Your feedback is welcome.

Thanks to everyone who participated in this event! See you next year (and why not before)!


Grenoble Eclipse DemoCamp is next week!

Hi everybody!

For those living in the French Alps, the Lyon area, or even the Geneva area, here is a reminder of an important event that takes place for the first time in Grenoble: an Eclipse DemoCamp! We’ll celebrate the latest release of Eclipse: Indigo.

Xerox Research Center Europe, where the DemoCamp takes place.

With this DemoCamp, you’ll have the opportunity to see some demos of what’s new in Indigo, some presentations and demonstrations of famous Eclipse tools you might never have had the opportunity to try, and last but not least, to chat with other people who are interested or involved in Eclipse development in the area.

The event takes place on Tuesday June 28th, during the afternoon. You can register either on the wiki page of the event, or using the EventBrite ticket system. For the hottest news, you can follow the Twitter account of the DemoCamp. And remember: all that for FREE!

I hope I’ll meet some of you there!
