Regulation of AI -- will it happen in the US? (Seems unlikely.) Should it?
The past few weeks have brought broad-based wails of AI anxiety. Last week, in a meeting with some senior execs, there it was again: warnings about scams, exploitation and worse. Then there is the AI laundry list of anxieties that keeps the media busy. Note the hearings in Congress that raise questions and obtain carefully worded answers meant to allay widely shared fears. A regulatory framework emerged two years ago in Europe, with the purpose of helping Europe become a hub for AI innovation. The US government is interested in regulation, it seems, more for prevention than to spawn innovation. But will it work? See social media.
Congressional efforts have apparently given up on social media regulation. Consider this past week's news about Instagram and the vast network of accounts selling child sexual abuse material that its systems helped connect. Consider that Meta, Instagram's parent, quickly created a task force to produce better internal controls, presumably to generate confidence (however little) and head off external (government) controls. Let's remember The Facebook Files, in which company execs admitted that they knew about, but did not control, Instagram content that was harmful to teenage girls, even though Instagram was implicated as a cause of the death of a teenage girl in the UK. And as interest in regulating TikTok has risen, Senator Amy Klobuchar noted that the tech lobby is so powerful that bills with "strong, bipartisan support" can fall apart "within 24 hours." So, to appear to be trying, lawmakers agreed to ban TikTok on government devices, whether or not it was a problem there.
Following the AI hearings, apparently THIS time things are different. Clearly AI has generated a far higher level of media attention and interest than social media ever did, despite the latter's well-publicized negative impact and harm. And AI execs (see recent hearings) seem interested in being helpful, particularly in assisting with the creation of regulatory frameworks to prevent much-discussed 'harm.' Will anything happen? After Amazon, Apple, Google, Meta and Microsoft spent $70 million on lobbying in 2022, what changed? Maybe 2023 is different? So far this year, at least $94 million has been spent on lobbying, and 123 companies have sent representatives to DC. The senators were critical in their comments but accepted the responses with little further pushback. Asked whether he takes a salary at OpenAI, Altman said he is not paid. Ah, but his net worth is estimated at $500 million, so the lack of a paycheck is not a problem.
But maybe the user warnings in the software will be helpful. ChatGPT now displays its 'limitations' before you ask a question, and it follows a lengthy answer about health risks that matter to older adults with recommendations: consider a range of factors, get advice from medical professionals, and other qualifiers. But then a surprise! On an Android device, the answer about health risks was followed by a new offer to import your browsing data from Google. (Say no.) Consider the significance of that from a privacy perspective, about which OpenAI is ostensibly very concerned, having published its data collection policies. Does ChatGPT explain WHY you should import your browsing data from Google, or what might be done with it besides improving answers to your questions?
Check out the new report, The Future of AI and Older Adults 2023, to learn more about AI and older adults.