Thursday, August 07, 2025

Miscellaneous AI-related questions

Question mark with AI-sparkle

No answers. "Just" questions I'm aware of and considering:

  • If working with AI means communicating with machines more like we do with other humans, how do we stop the influence running the other way, so that we end up treating people more like machines?
  • Are agents "the future of [all] work"? And, if not all work, how do we identify the work that can change or be replaced?
  • If "AI is only as good as your data", why isn't there as much effort being put into ensuring the quality and accuracy of the data as there is hype about AI?
  • At what point does AI stop needing human oversight? All the education highlights the need for human oversight, but the futurists' visions don't include it...
  • What sits in the middle ground between traditional GUIs and "just" a text box?
  • As feedback is highlighted as essential when developing tools with AI, is there a way for feedback from a tool to be passed back to those creating the underlying models?
  • If there's a GUI for something, does it automatically need (and benefit from?) an equivalent interface that's accessible via command line, API, and Agent/MCP?
  • As the speed/rate of change is a common complaint among all types of people, across disparate tasks, how do you factor this in when introducing AI-powered tools?
  • If people are generally reluctant to read instructions, why will they happily read the text-based response from an AI tool telling them how to do something?
  • Asking good questions is hard. How people ask questions of AI-powered tools greatly impacts the quality of results. In training people to use AI, are they also being taught to ask good questions?