US users more concerned about AI privacy than job loss | The Express Tribune

Though rapidly developing artificial intelligence (AI) technologies seemingly make our lives easier, users of AI tools are more concerned about data privacy than about the new technology replacing them in the workplace.

The use of AI tools shapes our work and social lives and brings with it privacy concerns.

While concerns about job losses due to AI replacing human workers are on the rise, the effect of AI tools on personal privacy has become a hot topic.

A survey of some 1,000 college-educated US consumers by the consulting firm KPMG showed that respondents believe the benefits of AI technology outweigh the risks of using it.

Some 42% of the respondents said that generative AI tools have significantly impacted their personal lives, while the remaining 58% said such applications shaped their professional lives, and 51% expressed significant excitement over generative AI.

More than half of the participants in the KPMG survey believe that generative AI tools will bring improvements across a wide range of areas, from physical health and cybersecurity to personalized recommendations and education.

However, the surveyed participants expressed concerns over fake news and content, AI scams, data privacy, disinformation, and cybersecurity risks arising from the increased use of AI.

Among the participants, 51% expressed concerns over job losses due to AI replacing human workers.

As for opinions on federal regulation of AI development, 60% of Gen Z and Millennial respondents said current regulations are “just right” or “too much.”

Additionally, 36% of Gen X and 15% of Boomer and Traditionalist participants agreed with the current government approach to regulating AI development in the US.

– Biden administration’s executive order on AI

The Biden administration issued an executive order on the safe, secure, and trustworthy development and use of artificial intelligence on Oct. 30, 2023.

The order, issued to protect Americans from potential risks of AI tools, required companies developing AI technologies to share security test results and other information with the US government.

In addition, new rules were introduced to protect people against fraud from AI-made content by implementing verification.

Meanwhile, the US Federal Trade Commission (FTC) launched a wide-ranging investigation into the ChatGPT-maker OpenAI last year for allegedly violating consumer protection laws.

The FTC launched an investigation into Alphabet, Amazon, Anthropic, Microsoft, and OpenAI’s generative AI investments and partnerships in January.

At the beginning of June, reports in the US revealed that the Department of Justice would investigate chipmaker Nvidia over its role in the AI boom.