AI Trends – The Greens, The Scams, and… Talent Hunt

by paradoxig

A few years ago, funny memes were circulating around the internet about how slow AI development was. Just to recall this one, dug up from the social media archives.

But nowadays, people are happily playing in the AI playground, with fancy new tools appearing every day. Midjourney already feels like ancient history compared with ChatGPT, which can even write your PhD; and beyond that, we have AI video tools, AI music, AI job-hunting tools, and more. The AI party is rocking! This might be that part of history when AI works for us and we get paid. Or not? Because some are ruining the party…

Recently, the tech gurus sent a signal: stop the AIs! An open letter signed by over 1,000 tech leaders and AI experts, including Elon Musk and Apple co-founder Steve Wozniak, called for a six-month pause on creating powerful AI systems in order to develop safety protocols. And then Stanford University unveiled its 2023 AI Index Report – an initiative of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) led by the AI Index Steering Committee. According to the institution, the report is meant to enable “decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind.”

It is obvious that AI is all around us, and the report packs some alarming findings into its 386 pages!
Here are some reflections, questions, and takeaways from the report:

Is AI Green or NOT Green?
It turns out that AI has a dual nature: it can be green, but it can also be not-so-green.
According to Luccioni et al. (2022), BLOOM’s training run emitted 25 times more carbon than a single air traveler on a one-way trip from New York to San Francisco. Still, new reinforcement learning models like BCOOLER show that AI systems can be used to optimize energy usage.
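To give a rough sense of where such comparisons come from, here is a minimal back-of-envelope sketch of how training emissions are usually estimated: energy consumed (GPU-hours × power draw × data-center overhead) multiplied by the grid’s carbon intensity. All the numbers below are illustrative assumptions, not the actual figures behind BLOOM or the flight comparison in the report.

```python
# Back-of-envelope estimate of training-run carbon emissions.
# Every constant below is an illustrative assumption, NOT a figure
# from Luccioni et al. (2022) or the AI Index report.

GPU_HOURS = 1_000_000        # assumed total GPU-hours for a large training run
GPU_POWER_KW = 0.4           # assumed average power draw per GPU, in kilowatts
PUE = 1.2                    # assumed data-center overhead (power usage effectiveness)
GRID_KG_CO2_PER_KWH = 0.06   # assumed carbon intensity of the local electricity grid

# Total energy drawn from the grid, then converted to tonnes of CO2.
energy_kwh = GPU_HOURS * GPU_POWER_KW * PUE
emissions_tonnes = energy_kwh * GRID_KG_CO2_PER_KWH / 1000

FLIGHT_NY_SF_TONNES = 1.0    # rough per-passenger figure for a one-way NY–SF flight

print(f"Estimated training emissions: {emissions_tonnes:.1f} tonnes CO2")
print(f"Equivalent one-way NY–SF flights: {emissions_tonnes / FLIGHT_NY_SF_TONNES:.0f}")
```

Plugging in the real GPU counts, run times, and the carbon intensity of the local grid is what separates a green training run from a not-so-green one.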

The Start of the AI-Scientists Era
In the AI race between academia and industry, industry leads the way. Building AI systems requires data, money, and resources that industry has. As a result, in 2022 there were 32 significant industry-produced machine learning models, compared to just three produced by academia.

And if academia can’t build the biggest AIs, AIs themselves can now do scientific research. The report highlights that AI models are accelerating scientific progress and have already been used to aid hydrogen fusion, improve the efficiency of matrix manipulation, and generate new antibodies.

Future of Toxic AI+Scams is Here

The number of incidents involving the misuse of AI is rising rapidly: AI incidents and controversies have increased 26-fold since 2012, the report warns. Some of the reported issues include a deepfake of Ukrainian President Volodymyr Zelenskyy surrendering, face-recognition technology used to track gang members and rate their risk, and surveillance technology used to scan and determine the emotional states of students in a classroom. Besides this, the bias problem persists in large models. Generative models have arrived with ethical problems of their own: text-to-image generators are routinely biased along gender dimensions, and chatbots like ChatGPT can be tricked into serving nefarious aims.

AI can build better AIs
As AIs get better and better, why shouldn’t they also outperform humans at building AIs? The report highlights that Nvidia used an AI reinforcement learning agent to improve the design of the chips that power AI systems. Similarly, Google recently used one of its language models, PaLM, to suggest ways to improve that very same model.

Companies Want AI + Human AI Talent
While massively adopting AI, companies are also searching for AI talent: job postings seeking AI skills increased across all sectors, and the overall number of AI job postings was notably higher in 2022.
In parallel, the proportion of companies adopting AI in 2022 has more than doubled since 2017, and those adopters report meaningful cost decreases and revenue increases.
Where and how? The report says that the AI capabilities most likely to have been embedded in businesses include robotic process automation (39%), computer vision (34%), natural-language text understanding (33%), and virtual agents (33%). Moreover, the most commonly adopted AI use case in 2022 was service operations optimization (24%), followed by the creation of new AI-based products (20%), customer segmentation (19%), customer service analytics (19%), and new AI-based enhancement of products (19%).

China loves AI. The feeling is mutual. US, EU: AI is scary!
According to the report, 78% of Chinese respondents (the highest proportion of surveyed countries) agreed with the statement that products and services using AI have more benefits than drawbacks. After Chinese respondents, those from Saudi Arabia (76%) and India (71%) felt the most positive about AI products. Only 35% of sampled Americans (among the lowest of surveyed countries) agreed that products and services using AI had more benefits than drawbacks.

Read the entire report here: https://aiindex.stanford.edu/report/

The AI Index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. The annual report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind.

The AI Index collaborates with many different organizations to track progress in artificial intelligence. These organizations include: the Center for Security and Emerging Technology at Georgetown University, LinkedIn, NetBase Quid, Lightcast, and McKinsey. The 2023 report also features more self-collected data and original analysis than ever before. This year’s report included new analysis on foundation models, including their geopolitics and training costs, the environmental impact of AI systems, K-12 AI education, and public opinion trends in AI. The AI Index also broadened its tracking of global AI legislation from 25 countries in 2022 to 127 in 2023.
