For decades, software development has been done by hand. From punching cards in FORTRAN to writing distributed systems in Go, the discipline has remained fundamentally the same: think deeply about a problem, come up with a clever approach (i.e., an algorithm) and give the machine a set of instructions to execute. This method, which could be called “explicit programming,” has been integral to everything from the mainframe to the smartphone, from the internet boom to the mobile revolution. It has helped create new markets and made companies like Apple, Microsoft, Google and Facebook household names.

And yet, something is missing. The intelligent systems envisioned by early Computing Age writers, from Philip K. Dick’s robot taxi to George Lucas’s C-3PO, are still science fiction. Seemingly simple tasks stubbornly defy automation by even the most brilliant computer scientists. Pundits accuse Silicon Valley, in the face of these challenges, of veering away from fundamental advances to focus on incremental or fad-driven businesses.

That, of course, is about to change. Waymo’s self-driving cars recently passed eight million miles traveled. Microsoft’s translation engine, though not fluent in six million forms of communication, can match human levels of accuracy in Chinese-to-English tasks. And startups are breaking new ground in areas like intelligent assistants, industrial automation and fraud detection, among many others. Individually, these new technologies promise to impact our daily lives. Collectively, they represent a sea change in how we think about software development - and a remarkable departure from the explicit programming model.

The core breakthrough behind each of these advances is deep learning, an artificial intelligence technique inspired by the structure of the human brain. What started as a relatively narrow data analysis tool now serves as something close to a general computing platform.
It outperforms traditional software across a wide range of tasks and may finally deliver the intelligent systems that have long eluded computer scientists - feats the press sometimes blows out of proportion. Amid the deep learning hype, though, many observers miss the biggest reason to be optimistic about its future: deep learning requires coders to write very little actual code. Rather than relying on preset rules or if-then statements, a deep learning system writes its rules automatically from past examples. A software developer only has to create a “rough skeleton,” to paraphrase Andrej Karpathy of Tesla, then let the computers do the rest.

In this new world, developers no longer need to design a unique algorithm for each problem. Most of the work goes, instead, into generating datasets that reflect the desired behavior and into managing the training process. Pete Warden of Google’s TensorFlow team pointed this out as far back as 2014: “I used to be a coder,” he wrote. “Now I teach computers to write their own programs.”

Again: the programming model driving the most important advances in software today does not require a significant amount of actual programming. What does this mean for the future of software development?
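Before turning to the implications, the “rough skeleton” idea can be made concrete with a deliberately tiny sketch (an illustration of the pattern, not Karpathy’s own example). Rather than hand-coding the rule for a logical AND, we give a one-neuron skeleton some labeled examples and let a classic perceptron training loop discover the rule:

```python
# Labeled examples stand in for explicit rules. The desired behavior is
# a logical AND, but that rule is never written down -- the model must
# infer it from the data.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# The "rough skeleton": a single linear unit with a step activation.
w = [0.0, 0.0]
b = 0.0

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# "Managing the training process": classic perceptron updates over the
# dataset until the learned weights reproduce the desired behavior.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error
```

After training, the weights encode the AND rule even though no one ever wrote it - the “program” came from the data. Real deep learning systems scale this same loop up by many orders of magnitude.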
- Programming and data science will increasingly converge. Most software will not incorporate “end-to-end” learning systems for the foreseeable future; instead, it will rely on data models for core cognitive capabilities and on explicit logic to interface with users and interpret results. The question “should I use AI or a traditional approach for this problem?” will come up more and more often. Designing intelligent systems will require mastery of both.
- AI practitioners will be rock stars. Doing AI is hard. Rank-and-file AI developers - not just brilliant academics and researchers - will be among the most valuable resources for software companies in the future. This carries a touch of irony for traditional coders, who have automated work in other industries since the 1950s and who now face partial automation of their own jobs. Demand for their services will certainly not decline, but those who want to remain at the forefront must, with a healthy dose of skepticism, test the waters in AI.
- The AI toolchain needs to be built. Gil Arditi, machine learning lead at Lyft, said it best. “Machine learning is in the primordial soup phase. It’s similar to database in the early ‘80s or late ‘70s. You really had to be a world’s expert to get these things to work.” Studies also show that many AI models are difficult to explain, trivial to deceive and susceptible to bias. Tools to address these issues, among others, will be necessary to unlock the potential of AI developers.
- We all need to get comfortable with unpredictable behavior. The metaphor of a computer “instruction” is familiar to developers and users alike. It reinforces the belief that computers do exactly what we say and that similar inputs always produce similar outputs. AI models, by contrast, act like living, breathing systems. New tooling will make them behave more like explicit programs, especially in safety-critical settings, but we risk losing the value of these systems - like AlphaGo’s “alien” moves - if we set the guardrails too tightly. As we develop and use AI applications, we need to understand and embrace probabilistic outcomes.
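The hybrid picture from the first point - a data model supplying core cognition, wrapped in explicit logic - can be sketched in a few lines. Everything here is hypothetical: `fraud_score` is a stand-in for a real trained model, and the thresholds are invented for illustration.

```python
def fraud_score(transaction):
    # Placeholder for a learned model's output: a probability in [0, 1].
    # In a real system this would be a trained classifier's prediction.
    return 0.92 if transaction["amount"] > 10_000 else 0.05

def review_transaction(transaction):
    score = fraud_score(transaction)   # learned component: core judgment
    if transaction["amount"] < 1:      # explicit rule: skip micro-payments
        return "approve"
    if score > 0.9:                    # explicit policy threshold
        return "escalate to human review"
    return "approve"
```

The learned component answers “how suspicious is this?”; the explicit code decides what to do about it. Mastery of both halves is what the convergence point above is getting at.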
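The last point, embracing probabilistic outcomes, is visible in the smallest unit of a learned system: a classifier emits raw scores, and a softmax turns them into a probability distribution rather than a single definitive answer. The labels and scores below are invented for illustration.

```python
import math

def softmax(logits):
    # Turn raw model scores into a probability distribution.
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

# Hypothetical classifier output: raw scores for three labels.
labels = ["cat", "dog", "fox"]
probs = softmax([2.0, 1.0, 0.1])

# The "answer" is a distribution, not a certainty; the caller must
# decide how to act on, say, a ~66%-confidence prediction.
best = max(zip(labels, probs), key=lambda p: p[1])
```

An explicit program would return one value; a model returns degrees of belief. Designing around that difference - rather than pretending it away - is what getting comfortable with unpredictable behavior means in practice.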