We’re hearing so much about the risks of generative AI like ChatGPT, the new Microsoft Bing and Google Bard. For example, Geoffrey Hinton, who is often called the “godfather of AI,” recently quit his job at Google so he could speak freely about the dangers of AI being developed by Google and other companies.
He told PBS NewsHour that he worries about “the risk of super intelligent AI taking over control from people.” Several tech experts, including Elon Musk, have called for a pause in the development of powerful generative AI models while we explore their risks and ways to make them safer.
There is reason for caution. If left unchecked and unregulated, all sorts of harmful things could happen, including AI systems evolving in ways that could harm humans and, more likely, empowering bad actors to perpetrate scams and other crimes, generate and disseminate disinformation, defame people, spew hate, plan insurrections and much more. It can also be biased and could put a lot of people out of work. Yet, along with the possible risks come some safety and security benefits.
A little help from ChatGPT
In the spirit of full disclosure, I had a little help with this column by asking ChatGPT two questions. One was “how can generative AI make you safer?” and the other was “how can generative AI make social media safer?” I asked the second question based on conversations I’ve had recently with social media safety experts who are bullish on how the technology can be used to help moderate their services. Aside from what’s in quotes, the words in this column are my own, as are most of the opinions. ChatGPT was helpful, and I make no apologies for taking advantage of what, for me, has become a research tool and writing aid, much like online search engines and spelling and grammar checkers.
How AI can make us safer
When it comes to security, ChatGPT correctly pointed out that “Generative AI can identify vulnerabilities, strengthen passwords, and develop countermeasures against cyber threats, thereby improving the overall security of networks and systems.”
It can also “analyze video surveillance data to identify potential threats,” though there are obvious privacy implications there.
AI can be, and already is being, used to analyze patterns and behaviors to identify “anomalies, such as unusual transactions to identify potential fraud.” It can help predict and recover from natural disasters, and it can be used in health care to monitor patients, make personalized health recommendations, and analyze images and other health tests to better diagnose conditions and recommend treatment.
A leading security expert, who asked not to be quoted by name, told me, “AI will turbocharge pre-existing trends. Bad actors will come up with new vulnerabilities, but the tools benefit the defenders just as much and maybe more.”
It’s also already being used to make vehicles safer. Despite its limitations and misleading name, Tesla’s so-called “Full Self Driving” uses AI to predict and avoid accidents, and Elon Musk, who has expressed concerns about the dangers of AI, recently said that Tesla plans to use “end-to-end AI” to improve vehicle autonomy and safety.
And, as ChatGPT reminded me, “Generative AI can be used to create smart home security systems that learn occupants’ routines and detect unusual activities, alerting homeowners to potential threats.” Of course, it could also be used by criminals to predict when homeowners will be away and better plan their crimes.
Social media companies are using AI to detect and remove spam, identify misinformation, and help prevent users from being targeted by malicious actors on social media. Industry insiders I’ve spoken with are optimistic that advances in AI will greatly improve their ability to police their services.
It can also be used to protect users from exposure to harmful and inappropriate content. It can be used to flag hate speech and cyberbullying, and it can be trained to understand a user’s particular triggers and other vulnerabilities to avoid exposing them to things that could harm or upset them. Because it can know who is using it, it could prevent children from seeing sexual or violent content while allowing access for adults.
It can also be used to identify illegal content such as child sex abuse images or content that encourages self-harm.
It could also be used to predict dangerous behavior, such as analyzing social media posts to predict who might inflict harm on others or themselves, though this, too, raises some privacy concerns as well as the risk of profiling.
That said, generative AI can also be used to create dangerous or inappropriate content, including digital child abuse images, spam, hate speech, doctored pictures, videos and audio, and other forms of misinformation. These risks have become a major topic at gatherings of child safety experts.
Double-edged sword
AI is a double-edged sword, but so are nearly all technologies, including the wheel, fire, kitchen knives, automobiles and even area rugs that create trip hazards. On balance, I’m excited about the ways AI can improve our lives and make us safer and healthier. But that doesn’t keep me from worrying about how it can be misused.
As we venture forward into the next paradigm shift in computing and the acquisition of knowledge, we need to be cautious, but we also need to avoid moral panics.
Disclosure: Larry Magid is CEO of ConnectSafely.org, a nonprofit internet safety organization that receives financial support from both Google and Facebook.
Source: www.bostonherald.com