Until about fifteen years ago, the computer industry oscillated slowly and predictably between two extremes: alternate generations of technology were dominated by either small, single-user devices or room-sized, multi-user behemoths. In the 2000s, we witnessed a fundamental change whose implications are only now becoming apparent. Let’s look at some history, then see where the industry seems to be heading.
Computational devices a hundred years ago were mostly gadgets based on 19th-century designs, like adding machines, cash registers, punch clocks, and typewriters, that supported only one user at a time. (There were also analog computers—some were even hydraulic—but those suffered a kind of mass extinction as digital electronics caught on in the 1940s.) Mechanical devices gave way to less portable (but still single-user) computers like the Z3, then to huge machines like ENIAC operated by small teams of people, and ultimately to enormous time-sharing systems like the IBM mainframes of the 1960s.
The beeper's gonna be making a comeback. Technology's cyclical.
—Dennis Duffy, 30 Rock
In the 1970s, the pendulum reversed direction, as closet-sized minicomputers gained popularity at a tenth the cost of mainframes. (Nowadays, even mainframes—which have been rebranded as the IBM Z series—are about closet-sized.) By the 1980s, personal computers fit on desktops. People started calling rooms full of big, shared computers “data centers” to avoid confusion.
In the 1990s, demand for World Wide Web sites pulled the pendulum back toward rooms full of hardware serving lots of people at once. In 2006, the introduction of Amazon Web Services (AWS) made centralized computing available as a utility, almost like electricity or running water.
In the late 2000s, Apple and Google introduced the iPhone and Android. Though nominally telephones, smartphones are of course general-purpose computers. The introduction of smartphones tugged the pendulum back (yet again) toward small, single-user devices; but this is when things got weird.
Instead of competing for market share, the big multi-user computers and the small personal ones actually complemented each other. After seventy years of big computers getting bigger and small ones getting smaller, the gap was finally wide enough that they were seen as disparate platforms, best suited to running entirely different kinds of software. The more people use smartphones, the greater the demand for online services; and the more online services become available, the greater the worth of your smartphone.
The death of the pendulum had staggering consequences. For example, the growing gap between big and small machines left room for new form factors like tablets to become popular. The conventional wisdom had been that people would keep wanting smaller versions of traditional form factors, so PC companies were producing uncomfortable little things called netbooks. The failure of netbooks left established players scrambling.
Form factors weren't the only thing that changed dramatically. Data centers became profit engines, while small computers became the height of fashion. They are now more portable and personal—even wearable—than ever.
Now that we've caught up to the present day, what can software engineers look forward to in the coming 10-20 years?
Custom hardware, and lots of it. Traditional applications targeted broad device classes like “server” or “desktop.” Going forward, we’ll either have to hyper-specialize or become truly device-agnostic. We’re already seeing multiple form factors, like smartphones and tablets, lumped together as “mobile”; and the back-end/front-end distinction of the early Web is giving way to “full stack.”
Compile-time abstractions. Expect to spend more time waiting for your code to build, in exchange for running efficiently on a wide range of (potentially tiny) devices. Minimizing power consumption will be increasingly important.
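To make that concrete, here's a minimal sketch in Rust (the language choice and the `RingBuffer` design are just illustrative assumptions, not something this post prescribes). The buffer is generic over its element type and capacity, so the compiler does the work up front and emits a specialized, allocation-free version for each concrete use:

```rust
// A compile-time abstraction in miniature: `RingBuffer` is generic over its
// element type and capacity, and the compiler generates a specialized,
// allocation-free version for each concrete use. The longer build buys a
// binary that fits, and runs efficiently, on very small devices.
// (`RingBuffer` is a hypothetical example, not something from this post.)

struct RingBuffer<T, const N: usize> {
    items: [Option<T>; N],
    head: usize,
    len: usize,
}

impl<T: Copy, const N: usize> RingBuffer<T, N> {
    fn new() -> Self {
        Self { items: [None; N], head: 0, len: 0 }
    }

    fn push(&mut self, value: T) {
        // Overwrite the oldest entry once the buffer is full.
        let tail = (self.head + self.len) % N;
        self.items[tail] = Some(value);
        if self.len < N {
            self.len += 1;
        } else {
            self.head = (self.head + 1) % N;
        }
    }

    fn latest(&self) -> Option<T> {
        if self.len == 0 {
            None
        } else {
            self.items[(self.head + self.len - 1) % N]
        }
    }
}

fn main() {
    // Everything here is resolved at compile time: no heap, no dynamic dispatch.
    let mut samples: RingBuffer<u16, 8> = RingBuffer::new();
    samples.push(42);
    println!("latest sample: {:?}", samples.latest());
}
```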
Heterogeneous, distributed, and parallel architectures. Concurrency is likely to be so abstract that the programmer doesn't know whether any given function call will run on the CPU, the GPU, a special-purpose coprocessor, or some other machine on the network.
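Here's a hedged sketch, again in Rust, of what that kind of abstraction might look like: the application calls `run` through a trait, and a backend chosen elsewhere decides where the work actually executes. `Backend`, `LocalCpu`, and `RemoteStub` are made-up names, and the “remote” backend only pretends to offload.

```rust
// The calling code is written once, against a trait; whether the work runs
// on the local CPU, a coprocessor, or another machine is a backend detail.

trait Backend {
    fn run(&self, input: &[f32]) -> Vec<f32>;
}

struct LocalCpu;

impl Backend for LocalCpu {
    fn run(&self, input: &[f32]) -> Vec<f32> {
        // Plain single-threaded CPU implementation.
        input.iter().map(|x| x * 2.0).collect()
    }
}

struct RemoteStub;

impl Backend for RemoteStub {
    fn run(&self, input: &[f32]) -> Vec<f32> {
        // In a real system this would serialize `input` and ship it to a GPU
        // worker or another node; here it just computes locally.
        println!("(pretending to offload {} values)", input.len());
        input.iter().map(|x| x * 2.0).collect()
    }
}

// Application code doesn't know or care which backend it gets.
fn double_all(backend: &dyn Backend, data: &[f32]) -> Vec<f32> {
    backend.run(data)
}

fn main() {
    let data = [1.0, 2.0, 3.0];
    // A scheduler could pick the backend at runtime; the caller stays the same.
    let local = double_all(&LocalCpu, &data);
    let remote = double_all(&RemoteStub, &data);
    assert_eq!(local, remote);
}
```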
Standard byte-code and system interfaces. The obvious front-runner for the byte-code is WebAssembly, but its system interface is still in early development. Leading alternatives include .NET, the Java Virtual Machine (JVM), and Erlang/Elixir/BEAM; but those all drag along object models and other baggage that aren’t really suitable for ubiquitous computing. Unity is actively working on a more suitable version of C#/.NET, and they seem pretty psyched about it.
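As a small illustration of the byte-code idea (assuming a Rust toolchain with the wasm32-unknown-unknown target installed, which this post doesn't mandate), the function below compiles to plain WebAssembly that any compliant runtime, whether in a browser, on a server, or on an embedded device, can load:

```rust
// Compile with:
//   rustc --target wasm32-unknown-unknown --crate-type cdylib add.rs
// The result is a .wasm module exporting `add`, runnable by any
// WebAssembly runtime without knowing it was written in Rust.

#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```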
It’s not surprising that the modern computing industry, still less than a century old, has already seen massive changes. It’s fascinating, though, that the nature of that evolution itself continues to change.
I'm betting on going agnostic. So, instead of building for native, I'm building for the web exclusively. I might be badly wrong on this.