We all remember the emergence of computers: writing lines of code to automate numerous business operations. Enterprise software was, and in fact still is, written in procedural and object-oriented languages like C, C++, Java, and Python. These programs gave the computer explicit instructions for automating manual jobs like payroll processing and inventory management.
Coders followed traditional waterfall models to meet business requirements. This was the era in which human beings gave explicit commands to machines, a period now known as the Software 1.0 phase.
What is Software 2.0?
Fast forward to about a decade ago: enterprises began designing applications whose code is generated automatically from business requirements. With advances in deep learning (DL), we can build neural networks that effectively write the software for us, replacing hand-coded instructions with learned behaviour. Programming a function with a trained neural network model is Software 2.0.
But what are these Neural Networks?
Neural networks are a set of algorithms, loosely modelled on the human brain, designed to recognize patterns. They interpret sensory data through a kind of machine perception, clustering or labelling raw input. The patterns they recognize are numerical, contained in vectors into which all real-world data, be it sound, images, text, or time series, must be translated.
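As an illustration, here is a minimal sketch of how raw text might be translated into such a numerical vector. The vocabulary is a toy assumption made up for this example, not anything prescribed by Software 2.0 itself:

```python
# A toy bag-of-words vectorizer; the vocabulary is an illustrative assumption.
vocab = {"cheap": 0, "free": 1, "meeting": 2, "invoice": 3}

def vectorize(text):
    vec = [0.0] * len(vocab)           # one slot per known word
    for word in text.lower().split():
        if word in vocab:
            vec[vocab[word]] += 1.0    # count occurrences of each word
    return vec

print(vectorize("free cheap free meeting"))  # [1.0, 2.0, 1.0, 0.0]
```

Images, sound, and time series get the same treatment: whatever the raw form, the network only ever sees vectors of numbers.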
Software 1.0 vs Software 2.0
With high-performance computing taking center stage today, developers have been encouraged to build analytical models that assist company leaders in their decision-making. Continuous research in artificial intelligence, including deep neural networks, has given birth to systems like AlphaGo that surpass human skill. This has prompted managers to shift from Software 1.0 decision-making systems (like data warehouses) to Software 2.0 neural-network-based decision systems.
Software 1.0 delivery ships the whole application over a number of iterations, blending incremental software pieces into the existing deployment. Software 2.0 solutions, on the other hand, adapt their logic dynamically based on the data they see, and are deployed as models rather than code.
In fact, we can envision developers building Software 2.0 applications simply by giving the neural network a domain objective, input features, and initial weights, as the sketch below shows.
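Here is a minimal sketch of that workflow, in plain NumPy with toy data invented for illustration: the developer supplies features, desired outputs, and initial weights, and optimization, not a human, writes the final "code":

```python
import numpy as np

X = np.array([[0.], [1.], [2.], [3.]])   # input features
y = np.array([[1.], [3.], [5.], [7.]])   # desired outputs (here y = 2x + 1)

w, b = np.zeros((1, 1)), np.zeros(1)     # initial weights
for _ in range(2000):                    # optimization searches the program space
    pred = X @ w + b                     # the "program" is just w and b
    grad = pred - y                      # gradient direction of the squared error
    w -= 0.05 * (X.T @ grad) / len(X)
    b -= 0.05 * grad.mean()

print(w.ravel(), b)                      # converges near w=2, b=1
```

No human ever wrote the rule "multiply by 2 and add 1"; the weights that encode it were found by gradient descent.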
With Software 2.0, we foresee an agile, more collaborative developer environment, leading to more efficient solutions.
Benefits of Software 2.0
Have a look at the numerous benefits Software 2.0 offers to the programming world.
#1: Computationally Homogeneous
A neural network is built from just two operations: matrix multiplication and thresholding at zero (ReLU). Compare that with the instruction set of classical software, which is far more complex and heterogeneous. Since you only need to provide a Software 1.0 implementation for a few core computational primitives (like matrix multiply), it is quite feasible to make various performance guarantees.
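A minimal NumPy sketch, with made-up layer sizes, makes the point: an entire forward pass reduces to those two primitives.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((784, 128))   # weights of layer 1 (sizes are illustrative)
W2 = rng.standard_normal((128, 10))    # weights of layer 2

def forward(x):
    h = np.maximum(0.0, x @ W1)        # matrix multiply + threshold at zero (ReLU)
    return h @ W2                      # matrix multiply again -- nothing else

out = forward(rng.standard_normal((1, 784)))
print(out.shape)                       # (1, 10)
```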
#2: Baking into Silicon Is Easy
As a corollary, since the instruction set of a neural network is relatively small, it becomes much easier to implement these networks much closer to silicon, for instance with neuromorphic chips, custom ASICs, and more. The world changes when low-powered intelligence becomes prevalent all around us. Picture, for example, an inexpensive chip that ships with a pretrained ConvNet, a speech recognizer, and a WaveNet speech-synthesis network, everything integrated into a tiny protobrain that you can attach to things.
#3: Constant Running Time
Every iteration of a typical neural net forward pass takes exactly the same number of FLOPs. There is none of the variability you get when code can take different execution paths through some sprawling C++ code base. Certainly, you can have dynamic compute graphs, but the execution flow is still usually well constrained. This way, we are almost guaranteed to never find ourselves in an unintended infinite loop; the sketch below makes the fixed cost explicit.
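A back-of-the-envelope sketch, with the same illustrative layer shapes as before: the cost is fixed by the shapes alone, independent of the input values.

```python
# Each (m, n) matmul costs about 2*m*n FLOPs per example,
# no matter what values flow through it.
layers = [(784, 128), (128, 64), (64, 10)]   # made-up (in, out) layer shapes

flops = sum(2 * m * n for m, n in layers)
print(flops)                                  # identical count for every input
```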
#4: Consistent Memory Usage
Related to the above, there is no dynamically allocated memory anywhere, so there is little possibility of swapping to disk or of memory leaks you have to hunt down in your code. Every buffer can be allocated once, up front, as sketched below.
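A minimal sketch under the same made-up shapes: every buffer the network needs is known before any input arrives.

```python
import numpy as np

batch, widths = 32, (128, 64, 10)   # illustrative batch size and layer widths
acts = [np.empty((batch, n), dtype=np.float32) for n in widths]
# Every forward pass reuses these same fixed-shape buffers,
# so there is nothing to leak and nothing to swap.
```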
#5: Highly Portable
A sequence of matrix multiplies is far easier to run on arbitrary computational configurations than classical binaries or scripts.
#6: It is Agile
If you have some C++ code and someone asks you to make it twice as fast (even at the cost of accuracy, if needed), it is highly non-trivial to tune the system for the new spec. In Software 2.0, however, we can take our network, remove half of the channels, retrain, and there is our end product, running almost exactly twice as fast. It is magic, isn't it?
Conversely, if you happen to acquire more data, you can instantly make your program work better simply by adding more channels and retraining. The sketch below shows the FLOP arithmetic behind this trade-off.
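Here is that arithmetic for a hypothetical single-hidden-layer network, with made-up sizes:

```python
# FLOPs of a one-hidden-layer network: both matmuls scale
# linearly in the number of hidden channels.
def forward_flops(d_in, channels, d_out):
    return 2 * (d_in * channels + channels * d_out)

print(forward_flops(784, 256, 10))   # full-width network
print(forward_flops(784, 128, 10))   # half the channels -> exactly half the FLOPs
```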
#7: Modules Can Patch into an Optimized Whole
Software is usually decomposed into multiple modules that communicate through public functions, APIs, or endpoints. However, if two Software 2.0 modules that were originally trained separately begin to interact, we can easily backpropagate through the whole. Just imagine how amazing it would be if your web browser could automatically redesign its low-level system instructions to load pages more efficiently. With Software 2.0, this is the default behaviour; a minimal sketch follows.
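The sketch below assumes PyTorch; the two modules and their sizes are invented for illustration. Two separately trained modules are patched into one system, and gradients flow across the module boundary so both co-adapt:

```python
import torch
import torch.nn as nn

module_a = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # imagine: pretrained separately
module_b = nn.Sequential(nn.Linear(32, 10))             # imagine: pretrained separately

whole = nn.Sequential(module_a, module_b)               # patch into one whole
optimizer = torch.optim.SGD(whole.parameters(), lr=1e-2)

x = torch.randn(8, 64)                                  # a toy batch of inputs
target = torch.randint(0, 10, (8,))

loss = nn.functional.cross_entropy(whole(x), target)
loss.backward()                                         # gradients cross the module boundary
optimizer.step()                                        # both modules improve jointly
```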
#8: Better Than What We Write
Last but not least, a neural network is a better piece of code than anything you or I could come up with in a large fraction of valuable verticals, which at the very least currently involve anything to do with sound, speech, and images.
Downsides of Software 2.0
Every coin has two sides, and Software 2.0 is no exception. It too has some disadvantages, a few of which are listed below:
#1: Difficult to Understand
At the end of optimization, we are left with huge networks that perform really well, but it is very difficult to tell how. Across many application areas, we will be left with a choice: use a 90% accurate model that we understand, or a 99% accurate model that we don't.
#2: Prone to Failure
The 2.0 stack can fail in embarrassing, unintuitive ways. Worse, it can fail silently, for example by silently adopting biases in its training data, which are very difficult to analyze and examine when dataset sizes run into the millions.
Final Words
Software 2.0 will become increasingly important in any domain where repeated evaluation is feasible and cheap, and where the algorithm itself is difficult to design explicitly. In the long run, the future of this paradigm is bright, as it is clear to many that when we eventually develop AGI, it will certainly be written in Software 2.0.