Notebook: AI, Generative AI, Large Language Models

This post is a gathering place for information on, discussion of, and thoughts about the quickly proliferating crop of Generative “AI” tools and toys that have been making headlines (and headaches) for the past while. I am currently creating and populating the categories below, which will likely be an ongoing process.

Notes

  • Large Language Models are not AI per se.
  • There is a large push for GenAI/LLM data sets to be “opt in,” i.e., these tools may not use any content in their training sets unless the creator of the content has given explicit permission for it to be included. On its face this is good and the correct way to approach the issue of copyright. It will be difficult for the owners of the LLM tools, as they will essentially need to purge all of their training data and start over from scratch. But since the LLM tool owners have taken a “beg for forgiveness rather than ask permission” approach, we can minimize our sympathy for their plight. One of the unintended consequences of opt-in only, which will inevitably make the LLMs terrible to the point of uselessness, is that while principled people will demand copyright protection, unprincipled people will opt in their hate speech, disinformation, and authoritarian propaganda, poisoning the pool of allowable content even more than the unguided training of the first generations of LLMs already has.
  • One of the more subtle dangers of the adoption of AI/LLM tools in business is that inevitably the LLM tool, rather than the human employee, will come to be seen as the de facto “expert” in whatever context the tool is used. The worth of the human employee will then be measured by how well they can coax the LLM tool into providing the “correct” solution to whatever problem the LLM is being used to solve. This is already happening, judging by all of the courses available that train humans in writing queries for LLM tools. On the one hand, being able to ask the right question is a useful skill; but when that skill is applied to an imperfect tool (and literally all LLMs are imperfect tools), and the blame for a wrong answer falls on the user for not being good at asking questions, then inevitably, thanks to capitalism, the humans will be discarded in favor of the less useful but also less expensive LLM tools.
  • On a societal level, the perception of what “AI” is, and what it can do, is just as dangerous as the actuality of what “AI” is and what it can do.

Technologies

Sources

Government, Policy

Business

Culture, The Arts

Science, Technology

Copyright, etc.

Detecting AI-generated content

Discussion

Unsorted Links