I’ve been invited by Benjamin Cabe to be part of the Program Committee (PC) for EclipseCon France. Benjamin told me it wouldn’t take too much time and I thought it would be something interesting. Benjamin was partly right (it does not take too much time, but it definitely takes some time), and I was totally right to think it would be a good experience.
Here are some additional thoughts and explanations about how a Program Committee works.
It’s been a few weeks since EclipseCon ended. It’s now time for a retrospective.
First, the event was great. Although I did not have time to attend that many presentations, I was able to discuss a lot of topics with a lot of people. I learned new stuff and shared some knowledge. I made huge steps forward in my own work, and helped other people move forward in theirs. I was also able to boost some projects I like and work on making them more widely used.
So I am pretty happy with this conference. Being at EclipseCon has improved, and will keep improving, my daily work; that’s the best thing to expect from this kind of event.
My top 3 favourite talks
Since I spent most of my time chatting with several people, I could only see a few presentations. I’d like to share 3 of them:
- Persona non grata by Brian Fitzpatrick.
I like the idea of bringing practices from marketing, aimed at better understanding what your users want, into software development. Introducing personas seems highly beneficial for a team, since every participant can talk about these personas instead of reducing users to a mere role. It helps the team get a clearer idea of what their users are like, and lets them refer to them easily in their daily work. It makes product teams think of their users as humans and act accordingly, for instance when developing a UI.
- Code Recommenders, by Marcel Bruch
Code Recommenders won the Best Developer Tool award, and this talk demonstrated why the project deserves such recognition. It is pretty easy to set up and use, and it immediately boosts your code-writing productivity. Code Recommenders has analyzed most of the open-source Java code on the planet, and brings the results into your IDE: snippets, templates, examples, pieces of advice and more, gathered from all the code it has analyzed! You can save a lot of time since you do not have to search the web anymore: Code Recommenders has snippets and advice for you. Always.
- How to profit from static analysis, by Elena Laskavaia
This talk explains why static analysis saves you (a lot of!) development time and money, and how to get the highest value from it: how not to spend more than necessary setting it up, and how to get started making static analysis part of your strategy. Very concrete lessons and tips! Here are my 3 favorite notes:
- Detecting a bug at dev-time with static analysis costs a few seconds; detecting it at build-time costs minutes to fix; detecting it in a satellite costs millions. (slide 5)
- JDT provides you with very good static analysis, but most rules are disabled by default. Just turning these rules on will make you more productive. (slide 18)
- full JDT analysis + PMD enabled in your IDE = profit.
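To make the second tip concrete, here is a small illustrative snippet (my own example, not from the talk) showing the kind of defect JDT's null analysis reports at dev time, once the corresponding warnings are enabled in the compiler settings:

```java
// Illustrative only: JDT's null analysis catches this at dev time,
// provided the "null analysis" warnings are turned on.
public class NullAnalysisDemo {
    static int safeLength(String s) {
        if (s == null) {
            System.out.println("got null");
        }
        // JDT flags "Potential null pointer access" here: after the check
        // above, s may still be null on this path.
        return s.length();
    }

    public static void main(String[] args) {
        System.out.println(safeLength("abc"));
    }
}
```

A few seconds of attention to that warning in the IDE is much cheaper than a NullPointerException in production.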
What I presented
This year, I had the opportunity to be a speaker for 3 talks.
How to remain agile in code generation approaches?
This short presentation during the Modeling Symposium explains some good practices to avoid being locked into code-generation approaches. Indeed, the indirection introduced by code generation has some pitfalls when it comes to customizing generated code. But actually, you don’t want to customize generated code, you want to customize generated behavior. Modifying generated code is not the best solution for that; there are more sustainable approaches.
The lesson is: consider generator config/generated code the same way you consider source code/compiled code.
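One classic way to customize generated behavior without touching generated code is the "generation gap" pattern. Here is a minimal sketch (all class names are hypothetical, not from the talk): the generator owns a base class it may overwrite at every run, and humans customize only a subclass the generator never touches.

```java
// Generated class: regenerated on every generator run, never edited by hand.
class GeneratedInvoiceService {
    public String header() {
        return "INVOICE";
    }
}

// Handwritten class: customizes generated *behavior* without modifying
// generated code, so regeneration never loses the customization.
class InvoiceService extends GeneratedInvoiceService {
    @Override
    public String header() {
        return super.header() + " - ACME Corp";
    }
}

public class GenerationGapDemo {
    public static void main(String[] args) {
        System.out.println(new InvoiceService().header()); // INVOICE - ACME Corp
    }
}
```

The generator config decides which base classes exist, just like source code decides which compiled classes exist; the customization lives elsewhere.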
Some news of GMF Tooling, at Modeling Symposium
If you read this blog, you probably know most of it.
Get ready to fight your technical debt, with Tycho, Sonar, and Jacoco
I had the opportunity to present, together with Xavier Seignard, how easy it is to integrate Tycho, Jenkins/Hudson, Sonar and Jacoco together. This provides a first-class build environment that takes you from continuous integration to continuous improvement. We think code quality is now pretty easy to get, even in the Eclipse world, so we should not do without it any longer. There are now very very very (…) very powerful and easy tools for quality in the Java world. They work with little effort, in minutes. That was the aim of this talk: showing people that they are very close to continuous improvement and giving them some keys to set it up by and for themselves.
I was also able to do some lobbying for Sonar@Eclipse.org, so that there is progress on bug 360935! Woot!
Things I did
EclipseCon, like any conference, is mostly a social event. It is a great opportunity to meet the people whose help you need, to meet the people who need your help, and to actually get things done and moving forward.
Tycho is the centerpiece of the builds for the projects I work on. With JBoss Tools, we have found some bugs, and we often have new requirements and ideas for improvements. Participating in the Tycho BOF was a great opportunity to chat with the other people building and using Tycho, and to highlight some issues that are important to us. It worked, and we are now pleased to see and provide “quality patches”, as Igor likes to say.
The Eclipse Foundation is making an effort to improve the “Common Build Infrastructure”. The goal is to provide an efficient and easy-to-use build environment for Eclipse projects. This is something very interesting for Eclipse.org projects, but also for consumers such as JBoss Tools, since we have ideas on making Eclipse projects easier to consume. So that’s something we follow closely, and we are happy to see all these efforts carried out to push Tycho and Jenkins usage across Eclipse.org projects!
Let’s hope CBI will also address issues regarding p2 repository governance (lifecycle, availability, URL conventions…). As I am writing, a bug has just been opened on this topic. Coincidence or esoterism?
I was able to see the presentation by Andres Alvarez Mattos and Ruben De Dios during the Modeling Symposium. These guys made a very nice editor for GMF Tooling models that lets you create a diagram editor graphically. It is not intrusive and is a much more efficient way to get started with GMF Tooling than the current approach based on models and tree editors. This editor simply feels like a renewal of GMF Tooling.
I helped them get some of their patches applied to the GMF Tooling code. I hope I’ll be able to nominate them as committers soon, since their editor would make a lot of sense in GMF Tooling.
That was a busy EclipseCon! But I enjoyed it! Some things got done, some can now be done, and some others will be done soon. There are concrete results and new opportunities; it’s everything I expected!
I don’t know if you know it, but I love Jacoco. Jacoco is a very neat and easy-to-use coverage tool. You can easily use it with existing Java applications: it is just a matter of adding a -javaagent entry to your JVM parameters. Getting coverage reports is as easy as increasing a PermSize.
Everybody loves Jacoco. Those who don’t love it yet simply need to give it a try to get convinced that Jacoco is the only good tool for code coverage.
We enabled Jacoco for our Tycho-based builds of JBoss Tools and JBoss Developer Studio components thanks to its Maven plugin, so we get some very nice jacoco.exec files that developers can import into their Eclipse workspace thanks to EclEmma. Then developers can see how their tests cover their code. That’s very nice.
But I think that measuring unit test coverage is not the best thing we can do with Jacoco. Since it is just a simple argument on the Java command line, we can think of… enabling Jacoco by default on real executions of Java applications, to track real usage and deduce which pieces of code are really used! This is pretty easy to do, and the performance impact is low enough to be acceptable in a lot of cases, especially for desktop applications.
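As a rough sketch of how little it takes, here is how such a launch command could be assembled (the jar paths are placeholders of mine; destfile and append are real Jacoco agent options):

```java
import java.util.List;

// Sketch: building the JVM command line that enables the Jacoco agent on a
// real application run, so usage data accumulates in a .exec file.
public class JacocoLaunch {
    static List<String> command(String agentJar, String destFile, String appJar) {
        return List.of(
                "java",
                // destfile: where execution data is written; append: keep data
                // from previous runs instead of overwriting it.
                "-javaagent:" + agentJar + "=destfile=" + destFile + ",append=true",
                "-jar", appJar);
    }

    public static void main(String[] args) {
        // In real usage you would hand this list to a ProcessBuilder, or simply
        // add the -javaagent entry to the application's launcher script.
        System.out.println(String.join(" ", command("jacocoagent.jar", "usage.exec", "app.jar")));
    }
}
```

The resulting usage.exec file can then be opened with EclEmma like any test-coverage report, except that it now reflects real usage.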
Knowing which pieces of your software are critical in production, and which ones are useless and should get removed, is priceless information that helps you make your software better. That’s a topic we are working on for JBoss Tools and JBoss Developer Studio, and we have ideas on how to make it real, and on improvements that would give us better control over Jacoco.
Stay tuned: soon all Java applications will use Jacoco!
It’s now been a few weeks since I joined the JBoss Tools and JBoss Developer Studio team and started working on the build. JBoss Tools is a HUGE amount of code, with about 35 components (or modules, in Maven terminology) that are aggregated in a way comparable to the Eclipse release train, and that all use a “Common Build Infrastructure” based on Maven/Tycho to perform builds and Jenkins to trigger them.
There are a lot of improvements in the pipe for Nick and myself to make the build more and more agile, and to make it produce more and more interesting results. I am particularly interested in going a step beyond continuous integration and opening the road towards continuous improvement, automating QA to get reports about static analysis and code coverage on each build. I bet it will help developers avoid mistakes and improve their code. (Advertising: you can learn more on this topic during the next EclipseCon.)
Here are some of the topics that are worth this blog post:
QA with Sonar
For static analysis, my choice goes fully to Sonar, a webapp that provides a dashboard aggregating reports from several QA “services” (Findbugs, Checkstyle, test reports, coverage…). Sonar provides nice integration with Maven and Jenkins & Hudson, and getting the whole thing working together takes only a few minutes of configuration. The return on investment when setting up Sonar is very high and immediate. In my humble opinion, Sonar is a must-have nowadays.
So, there are 2 bugs I encourage you to look at and vote for, in order to leverage Sonar, QA and continuous improvement in your favourite projects:
- @Eclipse.org: Set up a Sonar instance for Eclipse projects
- @JBoss.org: Set up Sonar for JBoss Tools (and more JBoss stuff?)
Jacoco, without Sonar
Jacoco is a very easy-to-set-up code coverage tool, with a very convenient Maven integration. With Jacoco, there is no need for an instrumentation step to get coverage results: all it takes is a -javaagent:jacocoagent.jar argument to your JVM. While using it, I did not notice any performance impact. That’s pretty cool.
The current issue for Jacoco adoption is the reporting. Here are the known ways to analyse reports from a Jacoco coverage output file (aka jacoco.exec):
- Use the Jacoco Ant task to generate HTML reports
- Use the Jacoco maven:site integration to generate reports as part of maven:site
- Use the EclEmma plugin to analyse Jacoco reports in your IDE
- Use Sonar
As a fan of Sonar, I think the Sonar-based approach is the best one. But the two bugs above show something clearly: despite my full enthusiasm for Sonar, community-wide infrastructures such as Eclipse.org or JBoss.org cannot set up a Sonar instance immediately. I guess this is because of issues with credentials, or hardware, or IT stuff. So we need to be able to consume Jacoco reports without Sonar.
The issue with all these approaches is that they introduce the need for a new build step or a new developer tool to analyze reports. The main dashboard for JBoss.org builds is Jenkins (and Hudson for Eclipse.org), so the ideal place for such reports would be a Jenkins plugin for Jacoco. Unfortunately this does not exist (yet), but a Jenkins issue is open for it. Now that I have lost some optimism about getting Sonar available for the Eclipse.org or JBoss.org communities quickly (as an impatient guy, quickly means today), this Jacoco Jenkins plugin is now my #1 wish in the whole Java software world. Although setting up Jacoco for execution is very easy, setting up integrated reporting is still almost a blocking point for adoption.
So please, vote for this bug: Jacoco Jenkins plugin!!!
Some other stuff
Since I’m writing a wish list here, I’d also like to share with you this Jenkins issue I opened: Add an option to select Build status when no Test is found. The idea is simple: even though no one should ever do that, it happens that we need to run a Jenkins build with skipped tests. Ok, it’s bad, but it happens. In such a case, the fileset for test results in the Jenkins JUnit plugin does not match any existing report, and the build turns FAILED/red. When you have cascading jobs, this is pretty annoying, since you’d rather see this build as UNSTABLE/yellow - my favourite - or SUCCESS/blue. So this issue simply requests the ability to change the result of the plugin when no test is found.
If you find this useful, you can vote for it.
Today is my last day working for PetalsLink.
Working for PetalsLink was a quite interesting experience:
On the technical side, I enjoyed moving all the XML-based tooling of Petals Studio to a more powerful EMF-based approach for the Petals JBI editor - for those who don’t know JBI, it is a standard that lets you define SOA artifacts in your ESB. Moving to EMF allowed us to provide better tooling faster, because most of the complexity in manipulating JBI can be removed with very little effort by leveraging EMF ExtendedMetaData. That was the first time I used this part of EMF, and I was pretty impressed by how well it works (working with EMF always gives this impression of “being well”). I also improved the ability to plug new JBI components into the Studio, which is a critical point when you have to deal with connectors for almost everything - Mail, SFTP, Talend, XSLT…. So that was an interesting challenge in terms of design and development.
Petals Studio was also the pretext to start using Git, GitHub and Sonar. I am pretty happy to have learnt these 3 tools that clearly improved the way I work.
Also, I had the great opportunity to work closely with several Eclipse projects:
- I contributed the Tycho build of GMF Tooling, put it on Hudson, got the source moved to Git and mirrored to GitHub, improved the wiki… GMF Tooling is a project I’ve used for 3 years now, and I often saw in it some critical organizational points to improve to make its development more dynamic. Working at PetalsLink gave me the opportunity to do what I thought was necessary to keep the project healthy. With the help of Michael Golubev, I now think this was a real success.
- I contributed the TreeMapper widget to Nebula, which will probably have some very interesting use cases soon. Having become a committer, I also helped improve the Tycho build and CI, and it seems the project liked it, judging by the new p2 update-sites.
- I contributed some small improvements to Eclipse BPEL designer, tried (unsuccessfully) to make SWTBot use Tycho, and developed a useful extension for Draw2d.
The only thing I wish I had been able to do here is to push ahead the usage of Sonar at Eclipse, at least for GMF Tooling and Nebula.
But I probably learnt even more from PetalsLink by discovering a company organisation very different from what I had experienced before (OpenWide and BonitaSoft): PetalsLink is focused on research about SOA and the agility of information systems. It is a wide topic! Petals products are quite good compared to other alternatives in the SOA landscape, but they don’t meet the success they deserve, which was a bit frustrating for a developer.
I enjoyed working for PetalsLink, and all my expectations were fulfilled, so it is time for me to move on, to find a new experience, a new team, new challenges, new issues… I love discovering new things!
I’ll have the opportunity to work with a great team! My main occupation for the next months will be to assist Nick Boldt in making the JBoss Tools CI and build infrastructure better and better. I’d also like to open the road towards efficient QA for JBoss Tools, including - among others - the usage of Jacoco and Sonar. Then I’ll also work on developing nice stuff for some JBoss Tools modules, most probably on the SOA/BPM side.
That’s gonna be a lot of fun! I’m eager for tomorrow, to actually get started with this new team/employer/project/product/users.
Let’s keep in touch via this blog and Twitter.
This is a blog post to sum up some of the thoughts I shared on the Nebula-dev mailing list with Wim Jongman, and later on Twitter with Zoltán Ujhelyi and Dave Carver, about naming build types at Eclipse.org and scheduling them.
It is a topic open to debate. My goal is to find out practices and names that are relevant and useful for the community (contributors and, mainly, consumers), and also to give food for thought about how to deal with builds.
Historically, Eclipse has had 4 to 5 classical qualifiers for binary artifacts: Release, Maintenance, Stable, Integration and Nightly.
This wording is specific to Eclipse; only Eclipse people understand the meaning of it. Even worse, some of these qualifiers are not used accurately. Although this is more or less official wording, I am not sure it is used by most projects.
Now Eclipse.org provides continuous integration to automate and manage build executions, and most builds happen whenever a change happens in your VCS. Continuous integration has greatly changed the way binaries are produced and made available to consumers: it is now much easier to get builds, and lots of projects have less than 10 minutes between a commit and a build ready to be released.
That was the starting point of my thoughts, with the Nebula build job: why call “nightly” a job that runs a build any time a commit happens?
Requalifying binaries to make consumption clearer
As a producer of builds, here is my opinion on these qualifiers:
- Release: The heartbeat of the product for consumers
- Maintenance: is a release, but with a third version digit different from 0
- Stable: is nothing but an integration build that was put on a temporary update-site for the release train. And to be honest, I don’t use it; I directly point the release train builder at the latest good continuous integration builds
- Integration: The heartbeat of the product for developers, built on each commit
- Nightly: What is in it? What does nightly mean, at the same time, for a developer in Ottawa and a developer in Beijing? Who cares that it is built nightly and not at 2pm CEST? For most projects, the nightly-built artifacts are not at all different from the ones that would have been built from the last commit. What is the added value of artifacts built during a nightly scheduled build over the latest artifact built from the last commit? Why build something every night if there was no commit for 3 months?
So I am in favor of removing Maintenance, Stable and Nightly. That leaves Release and Integration.
Wow, 2 kinds of builds… that reminds me of a tool… Oh yes! Maven! Maven has 2 types of binaries: releases and SNAPSHOTs. That’s great, we finally arrive at the same conclusion as Maven, except that what Maven calls “snapshot” is called “integration” in the Eclipse terminology.
But now, let’s consider the pure wording: why have 2 names for the same thing? What should we call a binary that is work in progress: an “integration” or a “snapshot”?
Let’s be pragmatic: this report tells us there are about 9,000,000 Java developers in the world. This one tells us that 53% of this population uses Maven. Let’s say that about 60% of Maven users understand the meaning of SNAPSHOT. That means 9,000,000 × 53% × 60% = 2,862,000 people know what SNAPSHOT means. Wayne confirmed to me that the size of the Eclipse community is 132,284 people, who might know the meaning of “integration” in an Eclipse update-site name. That’s the number of people who have an account on eclipse.org sites - wiki, forum, bugzilla, marketplace. Even if we assume that 100% of them understand the differences between the qualifiers - and I am pretty sure that fewer than the 600 committers actually do - that makes “snapshot” 21 times more popular and better understood than “integration”.
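The back-of-envelope arithmetic above can be checked in a few lines (the percentages are the rough estimates quoted in this post, not measured data):

```java
// Sketch: verifying the rough estimate that "snapshot" is understood by
// about 21 times more people than Eclipse's "integration" qualifier.
public class SnapshotEstimate {
    public static void main(String[] args) {
        int javaDevs = 9_000_000;      // estimated Java developers worldwide
        // Integer percentage math keeps the result exact:
        // 53% use Maven, and an assumed 60% of those know SNAPSHOT.
        int knowSnapshot = javaDevs / 100 * 53 / 100 * 60;
        int eclipseAccounts = 132_284; // accounts on eclipse.org sites

        System.out.println(knowSnapshot);                   // 2862000
        System.out.println(knowSnapshot / eclipseAccounts); // 21
    }
}
```

Even with generous assumptions on the Eclipse side, the ratio stays around an order of magnitude or more.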
So, Eclipse projects and update sites would be easier to understand and consume for the whole Java community if we accepted the Maven wording and used it in website and update-site names.
Following the Maven naming and the release/snapshot dogmas would make consumption easier, but would also avoid duplication of built artifacts and make things go more “continuously”. Your project is on rails, going ahead with snapshots, and sometimes making stops at a release. That’s also a step towards the present of software delivery (see how GitHub works): continuous improvement, continuous delivery.
Requalifying build job to make production clearer
So now let’s talk about build management, rather than delivery.
If you need several jobs to keep both quality and fast-enough feedback, then set up several jobs!
Dave Carver reminded me of some basics of continuous integration: keep short feedback loops, one build for each thing. That’s true. If you have a “short” build for acceptance and a “long” build for QA, you need separate jobs. Developers need acceptance feedback, and they also need QA feedback, but they certainly cannot wait for a long build to get short-term feedback. Otherwise, they’ll drink too much coffee while waiting, and that is bad for their health.
But do not create several builds until you have real needs (metrics can help you see whether you do). If you have a 40-minute full build that rarely fails, with slow commit activity, and nobody synchronously waiting for this build to move ahead, then multiplying builds and separating reports can be expensive for a low return on investment. That’s the case for GMF Tooling: we have a 37-minute build with compile + javadoc + tests + coverage + signing, and we are currently quite happy with it; no need to spend more effort on it now. Let’s see how we feel when there is a Sonar instance at Eclipse.org and we have enabled static analysis… Maybe then it will be time to split the job.
Avoid scheduling build jobs, it makes you less agile
Before scheduling a job that could happen on each commit, just think about how long the feedback loop between a commit and the feedback you’ll get will be: it is the time between the commit and the clock event that starts your job. It can be sooooo long! Why not start it on commit and get results as soon as possible? And why schedule and run builds when nothing may have changed between two schedule triggers?
I can only see one use case where scheduling job executions is relevant: when you have lots of build stuff in the pipe of your build server and limited resources. Then, in that specific but common case, you want to set up priorities: you don’t want a long-running QA job to slow down your dev team while they wait for feedback from a faster acceptance build. For this reason, I would use scheduling, but only because I have no better idea of how to ensure the team gets acceptance feedback when necessary; it is a matter of priority. Maybe investing in more hardware would be useful. Then you could stop scheduling, and get all your builds giving feedback as soon as they can. Nobody would wait for a schedule event to be able to move the project ahead.
As I said to Zoltán on Twitter: “The best is to have all build reports as soon as possible, as often as possible” (and of course only when necessary; don’t build something that has already been built!). Scheduling often goes against those objectives, but it sometimes helps avoid bottlenecks in the build queue, and then saves projects time.
Name it to know what it provides!
Ok, at Eclipse, there are “Nightly” jobs. I don’t like this word. Once again, “nightly” is meaningless in a worldwide community. And the most important questions are “What does this build provide?” and “Why is it nightly; can’t I get one on each commit?”.
If this build runs on each commit, then don’t call it “Nightly”, because that feeds the consumer with false information. You can think about having both “acceptance” and “QA” jobs; then put that in their names rather than scheduling info, since that is far more relevant information.
Continuous integration has changed the way we produce and deliver software; we must benefit from it and adopt the good practices that come with it. Continuous integration is the first step towards what seems to be the present or near future of software delivery: continuous improvement and continuous delivery. We must not miss that step if we want to stay efficient.