The Story of AI – and two brilliant Englishmen

Our story starts in October 1950 when an Englishman and renowned World War 2 codebreaker, Alan Turing, spelt out his test for what we now call artificial intelligence (AI). He wrote to the effect that if a machine, irrespective of the method it used, could exhibit an intelligence like that of a human being, then it should be labelled intelligent. This simple requirement would later become known as the Turing test.

For decades, computer scientists tried in vain to design machines that would pass the Turing test. They were preoccupied with designing systems based on clear rules and facts, which were easier to program – and cheaper to compute. These so-called classical algorithms, however, were not very useful in less rigid fields filled with ambiguity, such as language. With these inherent limitations, AI experienced a series of winters in which researchers lost heart and little progress was made. The first ‘AI winter’ lasted from 1974 to 1980, and the second soon followed, from 1987 to 1994.

In 1958, however, Frank Rosenblatt of the Cornell Aeronautical Laboratory in Buffalo, New York, had already devised a novel approach. Using a giant five-ton IBM 704 computer, he demonstrated the ‘perceptron’, which could distinguish between punch cards marked on the left and punch cards marked on the right, using what was described as a simple artificial neural network. The design was inspired by how the human brain was thought to work, with neurons (nodes) connected by synapses (numerical weights). But artificial neural networks did not take off, as there was not enough computing power available. Until later…
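
As a loose illustration of the idea (a minimal modern sketch in Python, not Rosenblatt’s actual hardware or data), a perceptron is simply a set of weights and a bias that get nudged towards the right answer each time the machine misclassifies an example:

```python
import numpy as np

# A toy perceptron: weighted inputs, a bias and a step activation.
# The learning rule nudges the weights whenever a prediction is wrong.
rng = np.random.default_rng(0)

def predict(weights, bias, x):
    return 1 if np.dot(weights, x) + bias > 0 else 0  # 1 = "right", 0 = "left"

# Synthetic "punch cards" with 20 features; cards marked on the right carry
# more ink in their second half (illustrative data, not the 1958 set-up).
X = rng.random((200, 20))
y = (X[:, 10:].sum(axis=1) > X[:, :10].sum(axis=1)).astype(int)

weights, bias, lr = np.zeros(20), 0.0, 0.1
for _ in range(25):                      # a few passes over the data
    for x, target in zip(X, y):
        error = target - predict(weights, bias, x)
        weights += lr * error * x        # strengthen or weaken the "synapses"
        bias += lr * error

accuracy = np.mean([predict(weights, bias, x) == t for x, t in zip(X, y)])
print(f"Training accuracy: {accuracy:.0%}")
```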

With the second AI winter having recently ended, AI hit the limelight in 1997. Garry Kasparov, the world chess champion, had accepted the challenge to play a six-game chess match against IBM’s Deep Blue computer. With the match tied at one win apiece and three draws, Kasparov shocked the chess world by losing the sixth and final game and, with it, conceding a match defeat for the first time in his career. A machine had defeated the finest chess player in history. Kasparov said he ‘lost his fighting spirit’. But this still was not the modern AI of today that has captured our imagination. Like the classical algorithms mentioned earlier, Deep Blue ‘relied mainly on a programmed understanding of chess’. There was more to come.

Geoffrey Hinton is our second brilliant Englishman. As a student at Cambridge, he switched between subjects several times before graduating with a Bachelor of Arts in experimental psychology. Hinton began working on neural networks in the late 1970s and early 1980s, when the field was largely left for dead. But, like Turing, Hinton felt that the ‘whole idea was to have a learning device that learns like the brain. And that was not my idea. Turing had the same idea and thought that was the best route to intelligence.’

In 1986, Hinton, along with two co-authors, published a paper on ‘learning representations by back-propagating errors’. They were not the first to come up with the idea of back-propagating errors – Rosenblatt had used the term but did not know how to make it work in practice – yet this important paper popularised the technique and, with it, artificial neural networks.

It was in Canada, not the US, where the newfound field of neural networks was to move from idea to invention. The Canadian Institute for Advanced Research (CIFAR), a government-funded ‘university without walls’, provided Hinton with a home to pursue his rudimentary ideas on neural networks. Others soon joined, and before long, CIFAR’s new neural network division (NCAP) was the hotbed of research on AI. The large technology companies were soon to take notice.

In 2012, Alex Krizhevsky, a PhD student in Canada, in collaboration with another student, Ilya Sutskever, and his PhD supervisor, Hinton, entered the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The ImageNet dataset consisted of over 1mn images, and the challenge was ‘to evaluate algorithms designed for large-scale object detection and image classification.’ This was no easy task. The three entered the competition using an unconventional approach – an artificial neural network that Krizhevsky designed. They used two GPUs (graphics processing units) made by a company called Nvidia to speed up the computing process. AlexNet, as the neural network was later named, thrashed the other competitors and won the Challenge. At this point, it became clear that deep learning using large neural networks was the way forward for AI. Google soon hired all three of them, and the other large tech firms began to acquire deep learning start-ups and other teams of researchers.

2012 was also the year that AI really found Nvidia. Since its founding in 1993, Nvidia was known as a provider of 3D graphics chips for computer games. These chips, known as GPUs, were designed for specific and repetitive tasks like accelerating the rendering of images on a screen – a memory-intensive process. GPUs are more efficient at doing this type of parallel processing than CPUs (central processing units) built for speed, i.e., with low latency. Tim Dettmers explains that GPUs are bandwidth optimised while CPUs are latency optimised. GPUs are like a truck – slow but can carry a lot – while CPUs are like a Ferrari – fast but cannot carry much.
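
As a loose, CPU-only illustration of that throughput-versus-latency distinction (a NumPy sketch in Python, not real GPU code), applying the same simple operation to millions of pixels one at a time is painfully slow, while handing over the whole batch at once is dramatically faster; a GPU takes that batching much further across thousands of cores:

```python
import time
import numpy as np

# Brightening an image is an "embarrassingly parallel" job: every pixel gets
# the same, independent operation. Batched, throughput-oriented processing
# shines here; handling pixels one at a time is latency-bound.
image = np.random.rand(2000, 2000)            # a synthetic 4-megapixel image

start = time.perf_counter()
out_loop = np.empty_like(image)
for i in range(image.shape[0]):               # one pixel at a time
    for j in range(image.shape[1]):
        out_loop[i, j] = min(image[i, j] * 1.2, 1.0)
loop_seconds = time.perf_counter() - start

start = time.perf_counter()
out_batch = np.minimum(image * 1.2, 1.0)      # the whole batch in one call
batch_seconds = time.perf_counter() - start

print(f"Pixel-by-pixel: {loop_seconds:.2f}s, batched: {batch_seconds:.4f}s")
```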

But before the GPUs of old would be of any use in an AI challenge like ImageNet, they needed to be adapted for broader use than 3D graphics. Nvidia had already started on this journey when it introduced CUDA, a programming platform that lets developers write general-purpose code for its chips, in 2006. CUDA allowed GPUs to be programmed for other purposes, not just graphics. At the time, the company’s co-founder and CEO, Jensen Huang, presumably felt that his GPUs had more to offer in solving more challenging computing problems. He was ambitious. CPUs were also running up against the limits of Moore’s Law – the expectation that the number of transistors on a chip would double approximately every two years. Huang tried other things that failed – like cellphone chips – but his somewhat crazy move to introduce CUDA opened a whole new world to Nvidia’s GPUs. People working on neural networks soon cottoned on and were using GPUs by 2007. Reprogrammable GPUs meant the right computing power had arrived to make AlexNet’s neural network possible. From 2012 onward, Nvidia and the AI community worked more closely together.

Before we move on, it is important to understand that GPUs are vitally important in the first leg of deep neural networks – training. This is the process whereby a computer sifts through a large dataset and learns ‘how to analyse a predetermined set of data and make predictions about what it means.’ It involves trial and error across the network’s artificial neurons, loosely mimicking how a brain works, until the model can draw accurate conclusions. The process is memory intensive, and Nvidia’s GPUs dominate this market. OpenAI’s large language model GPT-3 had 175bn parameters that needed to be trained; GPT-4, released in 2023, is believed to be substantially larger, although OpenAI has not disclosed its size (early estimates of 1,000x, or some 170trn parameters, remain speculation). Once the model has been trained, a task that preferably takes one month or less, the model is ready for inference. With inference, a chatbot receives a query, and the model predicts the specific answer. CPUs currently dominate the inference market, but Jensen Huang believes that GPUs will become a more energy-efficient option here. So training is like going to school and getting an education; inference is like applying that education once you have a job.
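
A minimal sketch of those two phases with a toy model in Python (purely illustrative; real LLMs have billions of parameters and train on racks of GPUs): training repeatedly adjusts the model’s parameters against known examples, while inference simply applies the finished parameters to a new query.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Training: learn the parameters from data (the heavy, memory-hungry phase) ---
X = rng.normal(size=(1000, 3))                 # 1,000 examples, 3 features each
true_w = np.array([2.0, -1.0, 0.5])            # the pattern hidden in the data
y = X @ true_w + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)                                # the model's "parameters"
learning_rate = 0.1
for _ in range(500):                           # gradient descent: trial and error
    gradient = 2 * X.T @ (X @ w - y) / len(y)
    w -= learning_rate * gradient

# --- Inference: apply the trained parameters to a new query (cheap by comparison) ---
new_query = np.array([1.0, 0.0, 2.0])
prediction = new_query @ w
print(f"Learned parameters: {np.round(w, 2)}, prediction: {prediction:.2f}")
```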

Only when DeepMind’s AlphaGo machine defeated the Go champion Lee Sedol in 2016 did modern AI really show up in board games. Unlike chess, where there are only around 400 possible positions after each player’s first move, Go has close to 130,000. AlphaGo’s intelligence relied on two neural networks and a game tree search procedure. ‘In the 37th move in the second game, AlphaGo made a very surprising decision. A European Go champion said, “It’s not a human move. I’ve never seen a human play this move. So beautiful.”’

OpenAI is where we close our history of AI. In late 2015, at the end of an AI conference in Montreal, Sam Altman, of the tech incubator Y Combinator, and Elon Musk unveiled a new AI venture, OpenAI. With guidance from Yoshua Bengio, an academic at the University of Montreal and, along with Hinton and Yann LeCun, one of the founding fathers of deep learning, OpenAI attracted the top AI researchers in the industry. These included Ilya Sutskever of AlexNet fame. Despite receiving far higher offers from other tech companies, the elite researchers who joined were drawn to the mission of OpenAI – a non-profit at the start – of advancing AI for the benefit of humanity and sharing its work with the public (a commitment to openness that has since been scaled back somewhat).

Since then, OpenAI has published various research papers, as is the norm in the industry, and unveiled a series of deep-learning generative AI products that have captured the world’s imagination. These include ChatGPT, a generative AI chatbot that runs inference on a large language model (LLM) called GPT, which has been trained on vast internet datasets, and DALL-E, which generates digital images. Which deep-learning models do they use? As The Information reports, ‘almost all generative AI models, including … ChatGPT, are based on transformers’, a Google invention. A transformer is a very efficient deep-learning model first described in a 2017 research paper by Google researchers titled ‘Attention is all you need.’
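
For the technically curious, here is a minimal NumPy sketch in Python of the scaled dot-product attention at the heart of a transformer (illustrative only; it omits the learned projections, masking and multi-head machinery of a full model). Each token in a sequence ‘attends’ to every other token and builds a context-aware representation from a weighted mix of them:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; the weights come from a softmax over
    the scaled dot products, as described in 'Attention Is All You Need'."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # relevance of each token pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ V                                # weighted mix of the values

# Toy example: a sequence of 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(42)
tokens = rng.normal(size=(4, 8))
# In a real transformer, Q, K and V are separate learned projections of the tokens.
output = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)  # (4, 8): one context-aware vector per token
```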

In 2019, Microsoft made an initial US$1bn investment in the three-year-old start-up and has since invested an estimated additional US$12bn. To his credit, Satya Nadella, the CEO of Microsoft, has driven this initiative despite initial reluctance from Bill Gates and members of Microsoft’s vast in-house research team, which had been working on AI since 2009 – with limited success. Amongst the large tech companies, Microsoft is now regarded as the leader in generative AI and is using this cachet to draw more customers to its cloud computing division, Azure. Microsoft is reported to hold a 49% stake in OpenAI’s for-profit arm in what is, for many observers, a somewhat awkward relationship. OpenAI, an independent company, needs the computing power of Microsoft but is understandably very cautious about the roll-out of AI and, probably, about being too tied to this tech behemoth. Conversely, for now, Microsoft needs the expertise of OpenAI but is aggressively rolling out OpenAI features across its product suite. There are contradictions here. Time will tell.

In the meantime, interest in other AI start-ups has also soared. Anthropic, Stability AI, Cohere, Hugging Face, Runway and many others are raising large amounts of money, primarily to access the computing power required for running LLMs. The spending on ‘picks and shovels’ to keep the machines whirring is also rocketing. Nvidia is the new Cisco. It feels like 1999!

Where has modern AI come from, and where is it going? The first major development in modern AI was image recognition (from 2012) – pictures and faces. Ian Buck of Nvidia describes that period as the era of Recognition. He says we have now moved into the era of Generation. Generation started small with Google’s BERT (Bidirectional Encoder Representations from Transformers) language models in 2018, but it has now blossomed into OpenAI’s wide-ranging latest large language model, GPT-4. LLMs will improve and become more efficient to train. Different specialisations are also developing as LLMs move into healthcare, finance, law, and other sectors. LLMs will also be adapted to run on smaller devices like smartphones. Google recently ran a version of its latest LLM, PaLM 2, on a Samsung Galaxy handset. Ben Bajarin of Creative Strategies predicts rudimentary apps running off smaller models with between 1bn and 10bn parameters. The key is to make generative AI cheaper to run so it is accessible to more users on more devices.

What comes after the era of Generation? The experts predict that Reasoning will be next. One of the limitations of LLMs is that they do not handle long chains of reasoning very well, although my understanding is that this is being worked on using techniques such as chain-of-thought (CoT) prompting.
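
As a rough illustration of the idea (hypothetical prompts only, not any vendor’s actual API), chain-of-thought prompting simply asks the model to spell out its intermediate steps before answering, which tends to help with multi-step problems:

```python
# Two ways of asking an LLM the same question (hypothetical prompt text).
direct_prompt = (
    "Q: A fund returns 10% in year one and loses 10% in year two. "
    "Is the investor back where they started? Answer yes or no."
)

chain_of_thought_prompt = (
    "Q: A fund returns 10% in year one and loses 10% in year two. "
    "Is the investor back where they started?\n"
    "Let's think step by step: first work out the value after year one, "
    "then apply the second year's loss to that value, and only then compare "
    "it with the starting amount before giving a final yes or no."
)
# The second prompt nudges the model to reason through 1.10 x 0.90 = 0.99,
# i.e., a 1% net loss, rather than jumping straight to an answer.
```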

In a September 2022 interview, Ray Kurzweil, the computer scientist and respected AI futurist, reiterated his view that AI will finally pass the Turing test in 2029. He has held this view for many years, while other AI experts believed he was too optimistic; many of them had begun to agree with him even before the arrival of GPT-4. Kurzweil uses a stricter definition of the Turing test, in which an LLM is tested over several hours to judge whether it displays human-like intelligence. Interestingly, he believes that once an LLM passes the test, it is conscious.

Kurzweil goes on to say that humans will then be able to connect their neocortex, the section of the brain where we do our thinking, to AI via the cloud. This is akin to connecting a smartphone (the brain, in this analogy) to the internet, which made the smartphone a much smarter device. There are already some rudimentary moves in this direction from companies like Neuralink, founded in 2016 by Elon Musk and others. Kurzweil expects this to happen in the 2030s. In other words, he thinks humans will merge with AI, amplifying our brains.

The end game with AI appears to be achieving artificial general intelligence (AGI), a step up from generative AI. We are not there yet. A seasoned AI investor, Ian Hogarth, wrote a chilling essay on AGI in the Financial Times in April 2023. He has another name for it: God-like AI. He writes, ‘AGI can be defined in many ways but usually refers to a computer system capable of generating new scientific knowledge and performing any task humans can.’ The risk with this type of God-like superintelligent computer, he continues, is that it ‘understands its environment without the need for supervision and …can transform the world around it.’

How do we ensure that AI is safe for humans and the planet? Using AI speak, how do we ensure the goals of AI ‘align’ with human values? This is causing great distress for many elders in the industry, like Hinton and Bengio. With the move out of academia into industry, the AI genie is out of the bottle. How do you contain something Bill Gates has described as the most important development since personal computers? To paraphrase Henry Kissinger et al., human intelligence is meeting artificial intelligence. It will take a global effort to define our relationship with artificial intelligence and the resulting reality.

If AI proves to be as disruptive to the global economy as other momentous technologies, such as the personal computer and the internet, then we should expect the usual short-term pain for long-term gain. Specific categories of jobs may become redundant in the near term, while new types of jobs will be created. This process has repeated over the centuries whenever a novel technology emerges: the new technology spurs the launch of a cluster of related technologies, and that cluster sweeps across the economy, gradually changing both the economy and society. After the initial disturbance, productivity growth typically accelerates, bringing long-term benefits to society. It may be too early to say whether AI will be any different.

Anchor will host a webinar on The Story of AI – and two brilliant Englishmen, with Anchor Head of Private Capital Deon Katz hosting the event and Anchor Fund Manager David Gibb taking the audience on a journey through the past, present and future of AI. 

The webinar will be held on Friday, 21 July 2023, from 10 AM-11:30 AM. You can register for the webinar by clicking here.
