The newest model of GPT-3 is out and taking the internet by storm.
OpenAI’s latest tech, GPT-3 (Generative Pre-trained Transformer 3), has been trained to converse, answer follow-up questions, admit mistakes and challenge incorrect statements. OpenAI is the same company behind the controversial text-to-image AI art generators DALL-E and DALL-E 2, the latter released earlier this year. In just five days the GPT-3 chatbot racked up over a million users (for context, it took Netflix more than three years to reach the same number). If that isn’t impressive enough, OpenAI is reportedly projected to pass $1 billion in revenue by 2024.
Rather than using this intelligent chatbot to solve complex equations, find the meaning of life (it’s 42, fyi), or help people living with illnesses like Sickle Cell Disease, the internet has, for the most part, been using it to make jokes. The same thing happened earlier this year with GPT-3’s cousin, the text-to-image generator DALL-E 2.
The AI can be used for everything from casual conversation to solving equations or even writing essays. The Guardian had it write an entire article for them.
A revolutionary tech, or a lot of talk about nothing?
The tech is loosely modelled on the human brain, using interconnected artificial ‘neurons’ that learn to identify patterns in data and predict what should come next. However, the chatbot itself is quick to point out that it is not human and shouldn’t be mistaken for one - no doubt a bit of damage control by OpenAI.
The chatbot was initially found to harbour some racist and prejudiced views (not unlike DALL-E and DALL-E 2), owing to its dependence on internet text for its knowledge; when we tested the AI while researching this article, much of that bias appeared to have been ironed out.
Although Gartner vice president Bern Elliot argued the chatbot was nothing more than a parlour trick, it is only a prototype built on the larger GPT-3 model, with GPT-4 possibly arriving as soon as this year - and no doubt offering a much larger AI brain than the current ‘parlour trick’.
Fears have naturally arisen, ranging from teachers worrying they will be unable to stop students cheating on homework, to visions of a Terminator/Blade Runner reality coming to fruition. The AI’s continually improving trajectory is impressive, and it calls into question the future of jobs like call-centre work - something that could plausibly be automated fully in the coming years. With a capable, productive AI of this nature working in tandem with companies, we could see millions of jobs become obsolete. Forbes has debated whether GPT and similar tools create new cyber security risks, another serious concern. However, the potential benefits for medicine are enormous.
How does this affect Sanius Health and patients with SCD?
It is no secret that advances in technology and advances in medicine go hand in hand: a breakthrough in one directly shapes the evolution of the other. Sanius Health already allows patients to monitor their health in detail from home, rather than having to travel to hospital for a check-up. The potential benefits of further-developed tech are boundless, and the good that Sanius Health could do with tools like GPT is a very exciting prospect.
An AI such as GPT-3 could be used to analyse and interpret massive amounts of genomics data to identify new drug targets or treatments. Clinical data could be ingested at machine speed, surfacing new insights into the disease. Personalised medicine and treatment could also develop dramatically with the help of such an AI.
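As a purely illustrative example, the sketch below shows how a clinical note might be summarised through OpenAI’s GPT-3 API using the openai Python package. The prompt, the fictional note and the model choice are assumptions for demonstration only, not a description of how Sanius Health works today.

```python
# Illustrative sketch only: summarising a fictional clinical note with a
# GPT-3 model via OpenAI's API. Real patient data would require consent,
# anonymisation and clinical oversight before any such use.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes an API key is configured

clinical_note = (
    "Patient with sickle cell disease reporting more frequent pain episodes "
    "over the past month, reduced sleep quality and a drop in daily step count."
)

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3-family model available at the time of writing
    prompt=f"Summarise the key clinical points in this note:\n\n{clinical_note}\n\nSummary:",
    max_tokens=100,
    temperature=0,             # deterministic output for repeatability
)

print(response["choices"][0]["text"].strip())
```

Even in a scenario like this, the model’s output would only ever be a starting point for clinicians, never a diagnosis.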
To a company like Sanius Health, already using technology to help its patients manage SCD and other illnesses, GPT-3 and similar technologies could prove invaluable.
Risks of using AI to help battle Sickle Cell Disease
The GPT-3 chatbot learned by ingesting massive amounts of text from the internet. Its training was largely self-supervised, meaning it learned patterns in language on its own rather than being explicitly programmed. It does this using a neural network design called the Transformer architecture, which allows it to converse by predicting the next word in a sentence from the context of the words that came before - a rough sketch of that idea is shown below.
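To make the next-word prediction idea concrete, here is a toy sketch. GPT-3 itself is only accessible through OpenAI’s API, so this example uses GPT-2, a smaller, older model in the same family, via the Hugging Face transformers library.

```python
# Toy illustration of next-word prediction with a GPT-style model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Sickle cell disease is a disorder of the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary, at every position

# The model's guess for the next word is the highest-scoring token
# after the final word of the prompt.
next_token_id = int(logits[0, -1].argmax())
print(prompt + tokenizer.decode([next_token_id]))
```

Chatbots like GPT-3 generate whole replies by repeating this step, feeding each predicted word back in as new context.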
Although this means it can learn far more, far quicker than a human, it also invites the possibility of learning false, biased, or even dangerous statements and opinions. In its own words:
“As a machine learning model, I am trained on a large dataset of text from the internet, which includes a wide range of information and perspectives. This includes content that may be offensive, racist, sexist, or otherwise harmful.”
Like anything new, there will be a learning curve. It is no secret that there is a history of systemic prejudice towards patients with SCD that often prevents them from getting the best treatment available. GPT-3 cannot be expected to overcome these prejudices if humanity itself is still struggling with the issue.
Conclusion
A key benefit of the tech is its seamless ability to triage, research and analyse. A constant issue with the NHS, and healthcare in general, is a lack of unified organisation: bureaucracy often gets in the way, and simple roadblocks can dramatically slow the pace of progress. This tech could revolutionise the way healthcare is structured.
AI tools such as GPT-3 will likely be hugely beneficial, but only if researchers and healthcare professionals work together to use them effectively. As impressive as these models are, their full potential will only be realised through human collaboration.