While Star Trek taught us differently, the appearance of artificial intelligence in our lives has been much more divisive than our forebears might have predicted.
A recent article in the Guardian newspaper even suggested that Homo sapiens might go the way of the Neanderthals, i.e., wiped out by a more intelligent species – in this case, AI. So, why do we continue to let it take over like so many tiny Terminators?
Contracts for Difference
The obvious answer is that AI is helpful. Within just the past decade, artificial intelligence has begun to break into one of the more sensitive areas of human life – finance and banking – suggesting that our trust in the concept is growing quickly.
The idea of using AI to help with trading, specifically, Contracts for Difference (CFD) trading, isn’t all that alien. It’s just a question of sorting and analyzing data, something that machines have always been good at.
Consider that even an Excel spreadsheet can use an algorithm to sort entries alphabetically or by highest and lowest values.
Contracts for Difference involve speculating on asset prices according to variables like market conditions and instrument prices. The difference between CFDs and regular stocks is that the CFD trader does not own the underlying asset.
The lack of ties between stock and trader means that CFDs allow for flexible portfolios that still take in familiar markets like the FTSE or the Nikkei 225, the benchmark index of the Tokyo Stock Exchange.
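To make the mechanics concrete, here is a minimal sketch of how a CFD settles: the trader never owns the asset, so profit or loss is simply the price difference multiplied by the position size. The prices and position sizes below are hypothetical, chosen purely for illustration.

```python
def cfd_pnl(open_price, close_price, units, long=True):
    """Profit/loss on a CFD position.

    The trader settles only the *difference* between the opening and
    closing price, never taking ownership of the underlying asset.
    A short position profits when the price falls.
    """
    diff = close_price - open_price
    return diff * units if long else -diff * units

# Hypothetical long position: 100 units opened at 50.00, closed at 52.50
print(cfd_pnl(50.00, 52.50, 100))              # 250.0 profit

# Hypothetical short position: price falls from 50.00 to 48.00
print(cfd_pnl(50.00, 48.00, 100, long=False))  # 200.0 profit
```

Note that the same formula produces a loss when the price moves against the position – the flexibility cuts both ways.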
The Libertex trading platform lists more than 100 different stock CFDs, including machine learning advocates like Google, Intel, Nvidia, and Microsoft.
The art of finding patterns, trends, and influencers in these markets is central to the experience of the CFD trader – and, right or wrong, this information determines what actions a trader takes.
With such a huge volume of information to sift through, who wouldn’t want to train an AI to do their job for them, and automate this entire process?
So far, so good. So, what’s the problem?
To err is human. We all make mistakes. Ironically, as AI is trained by humans, it's also far from infallible. Researchers have found that the language model behind ChatGPT has actually got worse at some tasks over time, for instance.
In the four months between March and June 2023, ChatGPT's accuracy at identifying prime numbers fell from 97.6% to 2.4%. Since most people would assume that math is an AI's strong suit, this might seem like a critical problem – and all the more so in areas dependent on data analysis, like trading.
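For contrast, the prime-identification task that tripped up ChatGPT is trivial for ordinary, deterministic code – a few lines of trial division give the same answer every time and never "drift":

```python
def is_prime(n):
    """Deterministic primality check by trial division."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:       # only need divisors up to sqrt(n)
        if n % i == 0:
            return False
        i += 1
    return True

print(is_prime(97))   # True
print(is_prime(100))  # False
```

The point is not that a language model should be replaced by this snippet, but that probabilistic systems can lose abilities that conventional software simply cannot.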
This kind of degradation in AI performance is known as "drift". It can occur when the carefully curated inputs of scientists are replaced with far more random inputs from users. A Microsoft chat AI – Tay – famously began producing unhinged output in less than a day when exposed to Twitter.
While AI trading models are less dependent on human inputs, outputs are still informed by audience sentiments and/or crowd “buzz”. Relevant data is usually collated by machines in the “human” way, by scouring news sites and posts on social media.
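As an illustration of the "buzz"-scoring idea, here is a deliberately crude, hypothetical sketch. Real trading systems use far more sophisticated language models, but the principle – turning scraped headlines into a numeric sentiment signal – is the same. The word lists and headlines below are invented for the example.

```python
# Hypothetical keyword lists; a real system would use a trained model.
POSITIVE = {"surge", "rally", "beat", "record", "growth"}
NEGATIVE = {"crash", "slump", "miss", "lawsuit", "decline"}

def headline_sentiment(headline):
    """Score one headline: +1 per positive keyword, -1 per negative."""
    words = set(headline.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def aggregate(headlines):
    """Average sentiment across a batch of scraped headlines."""
    scores = [headline_sentiment(h) for h in headlines]
    return sum(scores) / len(scores) if scores else 0.0

buzz = aggregate(["Shares surge to record high",
                  "Profits miss forecasts"])
print(buzz)  # 0.5 – mildly positive overall
```

The fragility is easy to see: the signal is only as good as the text it is fed, which is exactly why polluted or AI-generated content online is a problem for these systems.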
Language AIs like ChatGPT work along similar lines: they are trained on vast amounts of internet text, which they draw on to assemble coherent answers.
An unfiltered mix of inaccurate human content and existing AI content online presents a bit of a problem for machine learning.
For instance, the phenomenon of machines learning from machines has been described as an "ouroboros" – a snake eating its own tail until it destroys itself. Put another way, AIs make AIs dumber.
In another example that emphasizes the disconnect between human and machine understanding, an AI trained to fill out job application forms had a hit rate of just 0.4% – 20 interviews from 5,000 attempts. In that case, the results were acceptable (a failed application carries no real consequences), but a task like CFD trading may not be so forgiving.
Overall, AI remains a bit of an open question at present. Machines almost certainly have a role to play in human life – but releasing them into society at such an early stage in their development is bound to produce unexpected results.