

With the advancement of technology, AI influencers and virtual human models are becoming the new trend. These virtual humans have recently emerged as a blue-chip in the advertising industry because they come with no privacy scandals and no restrictions of time and space.

On September 10, Baek Seung Yeop, CEO of Sidus Studio X, the company that created 'Rozy,' the newly rising blue-chip in the advertisement industry, explained, "These days, celebrities are sometimes withdrawn from dramas that they have been filming because of school violence scandals or bullying controversies. However, virtual humans have zero scandals to worry about." In particular, the use of virtual humans seems to be gaining more momentum during the COVID-19 pandemic, with its many restrictions on travel and limits on the number of people who can gather.

New behavioral experiments by Alok Gupta from the University of Minnesota and Andreas Fügener, Jörn Grahl, and Wolfgang Ketter from the University of Cologne in Germany bring a cautionary tale for current AI applications. The research, published in late 2021, uncovers risks, consequences, and solutions related to over-reliance on AI in business and creative decisions. To present the novelty of their findings, let me use scenarios in the context of media and entertainment, where creativity and innovation are critical and where losing unique human knowledge could have negative consequences.

Gupta and his colleagues studied how humans and AI collaborate and complement each other to make decisions. They developed experiments around a simple image classification task (identifying the breed of a dog) to see whether and how AI-supported decision making improved task performance. Their first major finding is that humans are not very good at knowing when they should delegate decisions to AI. As a consequence, they can end up relying on an AI tool even when it recommends the wrong path. When making such mistakes, a team of humans that uses an AI tool can perform worse than a team that doesn't.

Collective intelligence emerges in humans and society when diverse minds with access to different data sources come together to find solutions to problems, a phenomenon also known as the wisdom of crowds. Gupta and his colleagues show that over-reliance on AI can decrease this diversity of thinking, leading to suboptimal collective performance. Gupta adds: "Essentially humans start mimicking AI and stop taxing their own brains, therefore they all act smart similarly like borgs."

A good example is the over-reliance of social media platforms on AI engines to power news feeds. If the AI algorithm converges to certain types of personalized content for a group of individuals, it can lead to an echo chamber within this group. Group members, in turn, can become content with a consistent, self-indulging, AI-filtered message, which is reinforced by peers in their social circle. Those who rely too much on news from social media platforms, which in turn rely too much on AI tools, can slowly become borgs, subject to the echo chambers of AI-enabled news feeds where diversity of thought is gradually lost. Oh well, isn't this already happening in some circles? As different groups separate in their collective thinking, they cannot appreciate different perspectives, and at one extreme, they live in alternative realities.

In the long run, the overuse of AI can turn humans into borgs. For media and entertainment firms, it can start with suboptimal content creation decisions that then have adverse social outcomes. The solution, according to this new research, doesn't seem to be that difficult, and you can already be a part of it: share this fresh cautionary tale that AI has limitations and that overusing it can kill creativity and diversity of thought.
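The delegation finding above is easy to see with a back-of-the-envelope simulation. The sketch below is purely illustrative: the accuracy numbers and the delegation rules are my own assumptions, not figures or protocol from the study. It shows how a human who cannot tell when to defer to an AI can end up worse off than a human working alone, while well-calibrated delegation beats both:

```python
import random

random.seed(0)

TRIALS = 100_000

# Illustrative (assumed) accuracies, not figures from the study:
HUMAN_ACC = 0.80               # human accuracy on any image
AI_ACC = {"common": 0.95,      # AI is strong on breeds it saw often in training
          "rare": 0.40}        # ...and weak on breeds it barely saw

def correct(acc):
    """Return True when a classifier with the given accuracy answers correctly."""
    return random.random() < acc

def run(delegate):
    """Classify TRIALS images; `delegate(case)` decides when to use the AI."""
    hits = 0
    for _ in range(TRIALS):
        case = random.choice(["common", "rare"])
        acc = AI_ACC[case] if delegate(case) else HUMAN_ACC
        hits += correct(acc)
    return hits / TRIALS

alone = run(lambda case: False)                       # never use the AI
over_reliant = run(lambda case: True)                 # always defer to the AI
well_calibrated = run(lambda case: case == "common")  # defer only where AI is stronger

print(f"human alone:     {alone:.3f}")           # ~0.800
print(f"over-reliant:    {over_reliant:.3f}")    # ~0.675, worse than alone
print(f"well-calibrated: {well_calibrated:.3f}") # ~0.875, better than either
```

The penalty in the over-reliant case comes entirely from not knowing when to delegate: the AI is trusted on exactly the cases (the "rare" breeds) where it is weakest, which is the pattern the researchers warn about.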
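The echo-chamber dynamic described above can likewise be sketched. In the toy deterministic model below (the topic list, click-through preferences, and re-weighting rule are all illustrative assumptions, not part of the research), a feed that keeps amplifying whatever gets clicked collapses from a uniform mix to a single topic:

```python
import math

TOPICS = ["politics", "sports", "science", "arts", "tech"]

# Assumed click-through preferences for one user group (illustrative only):
# this group clicks politics slightly more than anything else.
prefs = {"politics": 0.30, "sports": 0.25, "science": 0.20,
         "arts": 0.15, "tech": 0.10}

def entropy(dist):
    """Shannon entropy in bits: a crude proxy for 'diversity of thought'."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# The feed starts out perfectly balanced across topics.
feed = {t: 1 / len(TOPICS) for t in TOPICS}
print(f"initial dominant share: {max(feed.values()):.2f}, "
      f"entropy: {entropy(feed):.2f} bits")   # 0.20, ~2.32 bits

# Each round, the recommender re-weights the feed in proportion to the
# clicks each topic received, i.e. it amplifies past engagement.
for _ in range(50):
    clicks = {t: feed[t] * prefs[t] for t in TOPICS}
    total = sum(clicks.values())
    feed = {t: clicks[t] / total for t in TOPICS}

print(f"final dominant share:   {max(feed.values()):.2f}, "
      f"entropy: {entropy(feed):.2f} bits")   # ~1.00, ~0.00 bits
```

The point is qualitative: a feedback loop that only amplifies past engagement drives the dominant topic's share toward 1 and the feed's entropy toward 0, which is the echo chamber in miniature. A tiny initial preference hardens into an exclusive diet, and real recommenders mix in exploration precisely to counter this.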
