I can see both the threat and the potential in AI...
One of the first uses of AI was with an IBM supercomputer over 2 decades ago (Watson, I think — Deep Blue was the chess computer...) that was fed medical files from thousands of cancer patients, along with published scientific papers on diagnosis, treatments and medical advances. A group of highly respected doctors first used it on past cases, comparing their own recommendations against what the AI suggested, and saw that in a significant number of cases the AI gave the same diagnosis they had.
They then refined the system until the "success" rate of the AI's diagnoses improved to a level comparable to what the doctors had achieved on those past cases.
When they then experimented with new, undiagnosed cases, they saw that the AI sometimes gave a different result from what these specialist doctors had diagnosed. When they looked into why, it turned out that the doctors were simply not capable of keeping up to date with every newly published paper, with new information, new treatments and so on. The AI's recommendations were basically built on better information than the doctors had, and once they read the same papers they agreed with the AI's result.
At that time (over 20 years ago) the team of doctors used AI to "second-guess" their results: when a team had gone through the data and given their diagnosis, they would confirm it with the AI. If the AI gave a different result, someone was assigned to examine why. Often the cause was newly published data that, had the doctors been aware of it, would have impacted their decision too.
But that was a team of highly educated, respected and experienced doctors, working with one of the most powerful computers of its time and a powerhouse company of programmers and specialists customizing the whole system to their advantage.
--
Then we have what’s available to us, the common people...
I read an article by a prominent professor of ethics and philosophy. A survey indicated that the quality of college students' essays and papers has dropped with AI, both in accuracy and in the language used.
Keep in mind how (a lot of) AI works: it draws on what the net commonly presents as factual. If there were only 5 sites referencing the Apollo moon landings and 2 of them were conspiracy-based sites, the AI would probably answer questions on the subject with something like "Most think man has walked on the Moon, but many doubt it", when in reality it's probably 99.9% who believe it happened and 0.1% who doubt it.
It then bases its answers on the texts it finds. When students used edited paper books or edited scientific papers, the wording was based on that edited content. Now that anyone can publish, and editing, proof-reading, fact-checking, references etc. are basically optional, the quality of the material the AI finds drops... When a student then copy/pastes the result, they are using inferior material compared to text that was proof-read, edited and verified.
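To make the small-sample problem concrete, here's a toy sketch in Python. This is purely my own illustration of the tallying idea, not how any real AI system actually works:

```python
# Toy illustration: if an answer is formed by tallying what a handful of
# retrieved sources say, a couple of fringe sources can dominate the
# phrasing even when real-world consensus is overwhelming (~99.9%).
from collections import Counter

# Hypothetical pool: 5 sites on the moon landings, 2 of them conspiracy sites.
sources = ["landing happened"] * 3 + ["landing was faked"] * 2

tally = Counter(sources)
majority, count = tally.most_common(1)[0]
share = count / len(sources)

print(f"{majority}: {share:.0%} of retrieved sources")
# In this tiny pool the "doubt" side looks like 40% of opinion,
# nowhere near the ~0.1% it represents in reality.
```

The point of the sketch: the answer's tone tracks the retrieved pool, not the real distribution of belief, so a small or badly edited pool skews the result.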
-
I experimented with AI a few weeks ago. I asked it which plastic storage boxes available at a certain retailer were best for storing LP records (anyone remember those?). I got a very specific reply, complete with the item number for a particular container. When I tried to put my old LP collection into one, it was too small by ¾ of an inch.
My mistake was asking about storage for LP records... not for LP records in the sleeve/album cover they come in. The AI didn't seem to question or know that LPs are seldom stored "naked" or exposed.
You need to be quite detailed and specific in your questions.