The Rush to Open Source – But What’s Missing?
Jan 29, 2019 | Jones EST
A cautionary tale of navigating modern development
Search the Internet for “Open Source” and the number of hits is astonishing. The fact that open source has been accepted into the business mainstream is indisputable, as IBM’s purchase of Red Hat for US$34 billion proves. Or maybe it’s the rise of more or less open source Linux in its various distros, especially in servers, in the cloud, and in a large number of other devices that has fueled this movement. Certainly, businesses and governments have recognized that being able to change and adapt source code themselves is a benefit. The entire Internet was and still is largely developed using open source, and so it would seem that not only is open source here to stay, the future of computer science in part depends on the wise use of this software.
But what’s missing? Why do companies that cut costs in a relentless drive for efficiency still choose proprietary or commercial software? What are the benefits of using open source, and what are the pitfalls and concerns? What is missing if you, as a software architect, designer, developer, or engineer, choose to use open source software?
I want to offer answers to these questions using my own personal experience as Chief Technical Officer of an innovative start-up company that faced serious funding and schedule pressures (as most do). Now that free and open source software (FOSS) tools have become mainstream, there are best practices for combining open source with commercial offerings while solving some of the more unnerving challenges developers and businesses face today with open source.
In 2007 I decided to convert my hardware and software design services company into a product company, both to weather the Great Recession and to obtain venture funding. Our new company decided to focus on telehealth for chronic disease patients; the product included sensor devices and an innovative tablet platform built from an open source hardware reference design, which we extended significantly.
We also created a complete electronic health record/electronic medical record (EHR/EMR) web-based Clinician Access System that let clinicians set thresholds on the incoming telemetry to trigger alarms, and that provided a database for the billions of small data points coming in from as many as a hundred million potential patients. The system was developed with a mixture of PHP, open source MySQL, and other technologies – some home grown, some open source.
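The threshold-and-alarm flow described above can be sketched in a few lines. This is a minimal, hypothetical illustration – the metric names, patient identifiers, and limits are invented for the example, not Waldo Health’s actual schema:

```javascript
// Sketch of threshold-based alarm logic for incoming telemetry.
// Metric names and limits are illustrative, not the real system's schema.
function checkThresholds(reading, thresholds) {
  const alarms = [];
  for (const [metric, value] of Object.entries(reading.metrics)) {
    const limit = thresholds[metric];
    if (!limit) continue; // no clinician-set threshold for this metric
    if (value < limit.low || value > limit.high) {
      alarms.push({ patientId: reading.patientId, metric, value, limit });
    }
  }
  return alarms;
}

// Example: a clinician sets a systolic blood-pressure range of 90-140.
const thresholds = { systolic: { low: 90, high: 140 } };
const reading = { patientId: 'p-001', metrics: { systolic: 162, pulse: 71 } };
console.log(checkThresholds(reading, thresholds)); // logs one systolic alarm
```

In the real system the thresholds came from the Clinician Access System’s database rather than an in-memory object, but the per-reading comparison is the essence of it.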
Like all good development shops, we were constantly driven by process improvement, seeking ways to reduce our workload, and especially driven by a need to ship the product, start selling systems and licenses, and reach profitability. In order to meet schedule demands, we had to innovate, and we had to be smart about what we invented, and what we borrowed.
We all know that the open source world has come a long way and has become an entirely mainstream alternative for a lot of projects, and even some real products (especially those using GCC, Linux, and so on). But would there have been a better approach, perhaps a hybrid approach, that would deliver the benefits of open source (faster prototyping and demos) but with the robustness and technical support of a commercial offering?
These days, hybrid models are popular. For example, hybrid Cloud Computing that includes some servers inside a company (or a private Cloud) combined with a public Cloud make an enormous amount of business and technical sense, so much so that companies such as IBM, Dell, and HPE have embraced this as their primary Cloud selling strategy. Is there an analogue here for the web and mobile developer?
Open source licenses can be tricky
Learning to deal with the various types of open source licenses was a challenge, from GPLv2 to Apache and BSD. Because we modified the design of our hardware reference platform (which was an open source project), we knew that any changes we made to adapt the various open source bits had to be submitted back to the respective open source projects, which was always a concern, especially for our investors.
Our software stack had Open Boot firmware, Angstrom Linux (later we changed to Android), a mixture of open source and proprietary device drivers, our own proprietary sensor engine to control various medical equipment devices, and a browser on the tablet platform for our look and feel (perhaps one of the first uses of browser technology to drive a user interface in an Internet of Things setting). We had a mixture of open source and home-grown UI widgets. We had to invent a way to keep track of which source repository had which code, and who was entitled to our modifications. With the release of GPLv3, some of our open source concerns would have been alleviated, but the fact remains that different projects on GitHub and other open source repositories use (and used) different licenses. These days there are GPLv3, MIT, and many others – some permissive, some copyleft – and not every project uses the permissive MIT license.
What we would do differently: lessons learned
Implications for raising capital
The concern about which code we owed back to the open source community, and which source was proprietary (if not a trade secret), affected our ability to raise capital – never an easy task in those years. Investors want to invest in companies that have clear competitive advantages, such as a unique business model and, for a software-intensive company, software intellectual property (IP). But using open source meant giving up some of our inventions. We spent a considerable amount of time trying to work around this, but the fact remained: our unique software used some interesting modifications to Linux device drivers, and those had to be contributed back to the community – where our competitors could see what we invented. That’s why using a hybrid model made sense. Today this concern is reduced when a project carries a permissive license such as MIT, but of course not every open source code base uses one. The fact of the matter is, you must keep track of all licenses.
We found that one of the best uses of open source was in creating a working Proof of Concept (POC). To better explain to investors our concept and our unique innovations, a presentation deck was really no substitute for a POC that showed how the system would work. This POC allowed potential investors to kick the tires and learn how our solution, based somewhat on open source but with a substantial amount of our own software intellectual property, was best suited to take advantage of the coming telehealth and telemedicine explosion. If investors believed that we could pull this off with very few employees by leveraging open source, we believed we had an advantage – so long as we protected the inventions, the intellectual property we were developing.
By promising them that we would rewrite our “secret sauce” (software innovations) and create a protected source environment for it, we mollified our potential investors’ anxieties. We also created the potential to derive licensing revenue from our intellectual property. A company can hardly charge license fees for pure open source – who would pay for something that can be obtained completely free? Naturally we had to organize our source code to make this separation possible.
In general, the fewer sources of code, the easier it is to keep track of whom you owe code back to. Carefully annotating your code is also wise, in case someone in the open source world insists that you contribute a change back to the upstream repository.
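One lightweight way to make that annotation machine-checkable – a convention we did not have at the time, sketched here as an assumption – is to put an SPDX-style license identifier at the top of every file, then group files by license with a small script:

```javascript
// Sketch: group source files by their SPDX-License-Identifier header.
// Assumes each file begins with a comment line such as:
//   // SPDX-License-Identifier: GPL-2.0-only
function licenseOf(source) {
  const match = source.match(/SPDX-License-Identifier:\s*([\w.+-]+)/);
  return match ? match[1] : 'UNKNOWN'; // flag files that need legal review
}

function groupByLicense(files) {
  // files: map of path -> file contents
  const groups = {};
  for (const [path, source] of Object.entries(files)) {
    const license = licenseOf(source);
    (groups[license] = groups[license] || []).push(path);
  }
  return groups;
}

// Example:
// groupByLicense({ 'a.c': '// SPDX-License-Identifier: MIT' })
// → { MIT: ['a.c'] }
```

A report like this makes it obvious which files are owed back upstream (GPL-family licenses), which are permissively licensed, and which unlabeled files need review before a funding round.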
Open source giveth, open source taketh away: support and documentation falter
We ran into difficulties with some of the open source software we used to perform video encoding and decoding for video conferencing. Unfortunately, our processor vendor couldn’t help us: they were relying on the open source community to provide basic multimedia functionality and technical support, and at that time the community’s code was almost non-functional. The open source multimedia framework got us about 80% of the way to completion, then left us stranded. We couldn’t make it all work the way our microprocessor vendor said it should, and what technical support existed came from people uninterested in our particular application of the technology.
Asking questions on various forums didn’t get answers very quickly, if at all (for example, how to program the processor to encode 30 frames per second at VGA resolution with good performance). This is a perverse inversion of the 80/20 rule: you get 80% of the functionality in 20% of the time, but the remaining 20% takes 80% of the time!
About documentation: the comments in the source code didn’t really cut the mustard. Documentation during those years was slim to non-existent, and of course there was no one answering a phone call with a technical support request. If I had used a commercial vendor, I could have relied on solid documentation, product examples, and technical support instead of waiting (sometimes forever) for answers to my most vexing questions. There were literally weeks when we didn’t get answers from the open source communities, and while this has improved greatly in the years since we built this enterprise-level system, being able to pick up a phone and get answers quickly would have saved a lot of time.
Examples and Sample Code
Using open source developer tools such as GNU GCC worked well, but of course there were no example or sample programs to adapt for our application code. We tried doing all the work on the back-end server and used an embedded web browser to display the screens, but this hampered our ability to handle hundreds of thousands of transactions; the state of the art at the time simply didn’t allow for React- or Angular-style responsiveness. These days, this technique of using a browser on the client side of a high-end embedded tablet device – or better yet, progressive web applications – would be the right approach. We were just ahead of our time, often a lethal position in engineering!
The user interface design we created was innovative, especially for elderly chronic disease patients, but it took a very, very long time to design and code all of the widgets. From the Clinician Access System perspective, medical professionals in the chronic disease market are used to a certain look and feel for their graphs, their charts, and the data. Because we were accumulating a very large number of small bits of data (medical telemetry and other measurements), it was important to be able to summarize data for quick viewing and understanding by clinicians. It would have been much smarter to have used a product like Ext JS, ExtReact, or ExtAngular, which would have saved us many hours (months) of design, coding, and testing.
Sencha has extended the open source React and Angular frameworks into areas that developers can readily leverage. For example, the ExtReact and ExtAngular products each provide more than 115 user interface components and were created to work with React and Angular.
Getting started with prototyping using free downloads is a great way to quickly experiment with technologies. All of Sencha’s products offer free trial versions, and recently Sencha came out with the Ext JS Community Edition, which makes a lot of sense for prototyping as well. Had I done this with Waldo Health, I would have been able to show my investors real progress and the vision of the products and services. I could have then easily switched to the fully supported, commercial/professional grade version and not lost compatibility. Commercial companies like Sencha offer superior technical support, professional services, documentation, and consistent testing against new versions of browsers and new versions of Android and iOS.
How to leverage Ext JS Community Edition
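As a starting point, an Ext JS view is typically declared as a plain configuration object. The grid below is a minimal, hypothetical sketch of the kind of clinician-facing telemetry view described earlier – the field and column names are invented for illustration, and rendering it assumes the Ext JS (Community Edition) library is loaded on the page:

```javascript
// Hypothetical telemetry grid config for Ext JS; names are illustrative.
const telemetryGrid = {
  xtype: 'grid',
  title: 'Patient Telemetry',
  store: {
    fields: ['patient', 'metric', 'value', 'recordedAt'],
    data: [
      { patient: 'p-001', metric: 'systolic', value: 162, recordedAt: '2019-01-29' }
    ]
  },
  columns: [
    { text: 'Patient', dataIndex: 'patient' },
    { text: 'Metric', dataIndex: 'metric' },
    { text: 'Value', dataIndex: 'value' }
  ]
};

// With the Ext JS library loaded, the grid would be rendered with
// something like:
//   Ext.create({ ...telemetryGrid, renderTo: Ext.getBody() });
```

The point is that the sorting, paging, and rendering we hand-built for our widgets come with the component; the developer’s job shrinks to describing the data and the columns.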
Would I do it all over again? Yes, but I would likely choose better technologies backed by a real company for technical support, documentation, examples, training videos, and so on. The hybrid model of using open source for non-differentiating technologies combined with the lessons learned here would have likely resulted in a more successful company.