Losing Ourselves in Imitations
We will use this technology, because if we don't, we will fall behind. At the same time, the danger of losing control over our own lives is real, and it is imminent.
The other day I came across an ad from a new company that has developed an artificial intelligence solution that sends out emails and messages, responds to them, and books sales meetings, posing as an actual human sales representative. The company seems to be off to a good start, and there is a lot of interest in the service. It's not surprising, as there's a lot to be gained from such automation.
This is just one of many areas where such solutions are already starting to gain a foothold. Soon it will become common for people to let artificial intelligence not only write and answer emails and other messages in their name, but also imitate their voice in phone calls, and their voice, face, and body in remote meetings. In this way it will be possible to multiply one's output by copying oneself.
Imitations powered by large language models are of a different nature than those we have known so far: the models are very capable of imitating the writing style, and for that matter the speech, of individuals, and it is easy to feed them enough information to make the imitation credible. At first, such imitations can be expected to be used primarily for work-related purposes, but that will change quickly. Before long, we can expect them to be used for personal communication as well.
The problem is that those of us who use the technology in this way may soon lose track of exactly what has been said and done in our name, and of what information our AI imitations have received. Since an individual's self-image, and the image others have of them, are to a great extent based on communication with others and on the memory of that communication, the temptation to copy ourselves is probably one of the greatest threats artificial intelligence poses. In a very short time, it can cause us to lose control of our own lives and lose sight of who we are.
There is little or no chance that legislation or regulation will prevent this development. The technology is here to stay; the temptation to use it is too strong, and the benefits too great, for us to leave it be. Those who manage to use these possibilities in a controlled way will fare better, just like those able to take advantage of the other opportunities artificial intelligence offers to improve their skills and performance. But many more will be worse off: those who lose control over their own lives, gradually merging with their own imitations.
The large language models are fundamentally different from all previous technological innovations. This difference lies in their ability to use language and learn from and imitate actual people. The consequences are still only partly clear to most of us. We will use this technology, because if we don't, we will fall behind. At the same time, the danger of losing control over our own lives is real, and it is imminent.
Only we ourselves can prevent this, each and every one of us. To do so, we must understand the technology and the opportunities and threats it brings. But that’s not enough. To stay in control, we must actively strengthen our awareness of what matters to us and our ability to think clearly. And we must start right away.
The next step is that those AI-generated letters will be read, and then deleted as spam, by AI-generated readers. So we fix a problem generated by machines with machines. Business as usual.