Among other things, expect better language understanding and an AI boom in China.
2016 was a massive year for advancements in artificial intelligence and machine learning. But 2017 may well deliver even more. Here are five important things to look forward to.
Positive reinforcement
AlphaGo’s historic victory over Lee Sedol, one of the best Go players of all time, was a landmark for the field of AI, and for the technique known as deep reinforcement learning.
Reinforcement learning is inspired by the way animals learn that certain behaviors tend to lead to a positive or negative outcome. Using this approach, a computer can, say, figure out how to navigate a maze by trial and error, and then associate the positive outcome of exiting the maze with the actions that led up to it. This lets a machine learn without instruction or even explicit examples. The idea has been around for decades, but combining it with large (or deep) neural networks provides the power needed to make it work on really complex problems, such as the game of Go. Through relentless experimentation, as well as analysis of previous games, AlphaGo figured out for itself how to play the game at an expert level.
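To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning on a tiny maze. Everything in it (the grid size, reward values, and hyperparameters) is an illustrative assumption rather than a description of AlphaGo; deep reinforcement learning replaces the simple table below with a neural network.

```python
import random

# Tiny illustrative 4x4 grid maze: start at (0, 0), exit at (3, 3).
SIZE = 4
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
GOAL = (SIZE - 1, SIZE - 1)

# Q-table: estimated future reward for each (state, action) pair.
Q = {((r, c), a): 0.0
     for r in range(SIZE) for c in range(SIZE) for a in range(len(ACTIONS))}

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    """Apply an action; stepping off the grid leaves the state unchanged."""
    dr, dc = ACTIONS[action]
    r, c = state
    next_state = (min(max(r + dr, 0), SIZE - 1), min(max(c + dc, 0), SIZE - 1))
    reward = 1.0 if next_state == GOAL else -0.01  # positive outcome only at the exit
    return next_state, reward, next_state == GOAL

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.randrange(len(ACTIONS))
        else:
            action = max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Credit (or blame) the action that led to this outcome.
        best_next = max(Q[(next_state, a)] for a in range(len(ACTIONS)))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state
```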
The hope is that reinforcement learning will now prove useful in many real-world situations. And the recent release of several simulated environments should spur progress on the underlying algorithms by increasing the range of skills computers can acquire this way.
In 2017, we are likely to see attempts to apply reinforcement learning to problems such as automated driving and industrial robotics. Google has already boasted of using deep reinforcement learning to make its data centers run more efficiently. But the approach remains experimental, and it still requires time-consuming simulation, so it will be interesting to see how effectively it can be deployed.
Dueling neural networks
At the banner AI academic gathering held recently in Barcelona, the Neural Information Processing Systems (NIPS) conference, much of the discussion was about a new machine-learning technique known as generative adversarial networks.
Invented by Ian Goodfellow, now a research scientist at OpenAI, generative adversarial networks, or GANs, are systems consisting of one network that generates new data after learning from a training set, and another that tries to discriminate between real and fake data. By working together, these networks can produce very realistic synthetic data. The approach could be used to generate video-game scenery, de-blur pixelated video footage, or apply stylistic changes to computer-generated designs.
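For a concrete picture of how the two networks push against each other, below is a minimal, purely illustrative sketch of a GAN trained on one-dimensional toy data. It assumes the PyTorch library, and the data distribution, network sizes, and training settings are arbitrary choices for demonstration, not details from Goodfellow's work.

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a 1-D Gaussian (mean 4, std 1.5) -- an assumption.
def real_batch(n=64):
    return torch.randn(n, 1) * 1.5 + 4.0

# Generator maps random noise to a data sample; discriminator scores realness in [0, 1].
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to produce samples the discriminator labels as "real".
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

After enough steps, samples from the generator should start to resemble the real distribution, which is the same dynamic that, at much larger scale, lets GANs produce realistic images.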
Yoshua Bengio, one of the world’s leading experts on machine learning, said at NIPS that the approach is especially exciting because it offers a powerful way for computers to learn from unlabeled data, something many believe may hold the key to making computers a lot more intelligent in the years to come.
China’s AI boom
2017 may also be the year in which China starts looking like a major player in AI. The country’s tech industry is shifting away from copying Western companies, and it has identified AI and machine learning as the next big areas of innovation.
China’s leading search company, Baidu, has had an AI-focused lab for some time, and it is reaping the rewards in terms of improvements in technologies such as voice recognition and natural language processing, as well as a better-optimized advertising business. Other players are now scrambling to catch up. Tencent, which offers the hugely successful mobile-first messaging and networking app WeChat, opened an AI lab last year, and the company was busy recruiting talent at NIPS. Didi, the ride-sharing giant that bought Uber’s Chinese operations earlier this year, is also building out a lab and reportedly working on its own driverless cars.
Chinese investors are now pouring money into AI-focused startups, and the Chinese government has signaled its desire to see the country’s AI industry blossom, pledging to invest about $15 billion by 2018.
Language learning
Ask AI researchers what their next big target is, and they are likely to mention language. The hope is that the techniques that have produced spectacular progress in voice and image recognition, among other areas, may also help computers parse and generate language more effectively.
This is a long-standing goal in artificial intelligence, and the prospect of computers communicating and interacting with us using language is a fascinating one. Better language understanding would make machines a whole lot more useful.
Don’t expect to get into a deep and meaningful conversation with your smartphone just yet. But some impressive inroads are being made, and you can expect further advances in this area in 2017.
Backlash to the hype
Alongside genuine steps forward and exciting new applications, 2016 saw the hype surrounding artificial intelligence reach heady new heights. While many have faith in the underlying value of the technologies being developed today, it’s hard to escape the feeling that the publicity surrounding AI is getting out of hand.
Some AI researchers are evidently irritated. A launch party was organized during NIPS for a fake AI startup called Rocket AI, meant to highlight the growing mania and nonsense around real AI research. The deception wasn’t very convincing, but it was a fun way to draw attention to a genuine problem.
The problem is that hype inevitably leads to disappointment when big breakthroughs don’t happen, causing overvalued startups to fail and investment to dry up. Perhaps 2017 will feature some sort of backlash against the AI hype machine, and maybe that would not be such a bad thing.