AI roundup: Of workslop and rework | Patent that tech, industry player
Across industries, more than a third of the time saved by AI gets eaten by overstretched humans who have to clean up the technology’s sloppy work.
The going term for that brand of sloppy output is AI workslop, and it often comes up alongside another newly coined term: AI rework. Case in point: A new study shows around 37% of AI’s time savings is offset by rework. The survey portion of the research was conducted in November by Hanover Research on behalf of the cloud computing and enterprise software company Workday. “Employees report spending significant time correcting, clarifying or rewriting low-quality AI-generated content—essentially creating an AI tax on productivity,” Workday analysts note in a survey report released Jan. 13. “For every 10 hours of efficiency gained through AI, nearly four hours are lost to fixing its output.”
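To make the arithmetic concrete, here is a back-of-the-envelope calculation using the report’s own figures (the formula is our illustration, not one published in the study):

net time saved = gross time saved × (1 − 0.37) = 10 hours × 0.63 = 6.3 hours

In other words, roughly 3.7 of every 10 hours gained come straight back out as rework.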
The study was well-designed, requiring survey respondents to be currently using or personally exposed to AI as part of their daily duties. The survey drew responses from 3,200 individuals working around the world. Half were leaders (director-level and above) and half were employees (manager-level and below). Leaders had to have some measure of influence over their organization’s AI strategy decisions.
5 highlights from the report:
- Every year, organizations lose a week and a half per AI-engaged employee. That’s how much time these valuable team members spend reworking flawed AI outputs—about the equivalent of an extended vacation. “This hidden loss highlights a critical blind spot in how organizations assess AI performance,” the Workday analysts comment. “Most leaders focus on gross efficiency—how much time AI saves. But this metric alone obscures the real picture. When time lost to rework is taken into account, the net value of AI is often much lower than expected.”
- While two-thirds of leaders (66%) cite skills training as a top investment priority, that investment is not consistently reaching the employees most exposed to rework. Among employees who use AI the most, only 37% report increased access to training. That’s a nearly 30-point gap between stated intent and lived experience, the authors point out. “As a result,” they add, “many employees are expected to produce higher-quality outcomes with AI without the guidance or support needed to do so efficiently.”
- For employees already doing a large share of rework, outdated role definitions make it harder to capture AI’s benefits. “Without clear expectations for how AI should be used—and where human judgment must apply—employees default to verification and correction, absorbing the cost of low-quality output themselves,” the Workday analysts write.
- Employees aged 25 to 34 emerge as a consistent hotspot for AI-related rework. “While often assumed to adapt most easily to new technologies, this group accounts for nearly half (46%) of employees experiencing the highest levels of verification and correction of AI output.”
- Human resources bears a disproportionate share of the rework burden. HR professionals represent the largest share (38%) of employees experiencing the highest levels of AI-related rework, Workday found. “Their work involves people decisions, communications and compliance-sensitive processes, where ‘good enough’ output is rarely acceptable,” the authors remark. “As a result, HR teams audit AI-generated work with exceptional rigor, absorbing the time cost required to ensure accuracy, tone and fairness.”
Here’s some free legal advice for digital product developers: If your technologies are worth presenting to healthcare providers, they’re worth protecting with proactive patents.
Or, as stated by experienced lawyers at the international, 184-year-old, Milwaukee-headquartered firm Foley & Lardner: “In today’s AI-enabled healthcare market, patents are not optional legal artifacts. They are a strategic business tool that protects enterprise valuation, strengthens defensibility, shapes competitive leverage and reduces downside exposure.” Partner Aaron Maguregui, JD, and senior counsel Matthew Horton, JD, build their case in commentary posted at the firm’s website Jan. 14. Here are some of their key points.
- Many digital health companies still treat securing patents as a future consideration—something to address after product-market fit or the next financing round. “That approach increasingly creates risk,” Maguregui and Horton write.
- Well-designed digital health patents protect system-level functionality. “This includes how data is ingested and normalized, how models are trained or fine-tuned in regulated environments, how outputs are validated or constrained, and how decisions are operationalized in clinical settings,” the authors write. “These capabilities are often [your] true competitive advantage.”
- Beyond protection, patents send a signal to the market. They demonstrate long-term thinking, technical depth and seriousness about defensibility, Maguregui and Horton maintain. “In competitive enterprise sales, especially with health systems and payors, that signal matters,” they state. “It reassures customers that the platform they are adopting is not easily displaced.”
- In the AI context—where skepticism about commoditization is growing—patents help distinguish platforms that are truly differentiated from those that are not. Patents, the authors add, “tell a story about how the technology works and why it is hard to replicate.”
- The piece is worth a top-to-bottom read by industry players and the healthcare providers they sell to.
Have you ever conversed with someone who seemed to know just enough about this or that to come across as an expert—but too little to avoid giving himself away as a pretender?
Or maybe even a quack, if he carried on with his sciolism long enough? Viewed in a certain light, generative AI can be seen as a kind of poseur in its own right. Don’t take it from HealthExec. Take it from a scholar widely regarded as the greatest living mathematician. “It feels to me like a really clever student who has memorized everything for the test but doesn’t have a deep understanding of the concept,” Terence Tao, PhD, tells The New York Times. “It has so much background knowledge that it can fake actual understanding.”
- Why is the question of genAI’s ability to generate new ideas hot right now? Because the founders of an AI startup recently said their technology had collaborated with ChatGPT to solve a thorny mathematical problem. Times technology reporter Cade Metz looked into the key riddle raised by the claim: Did the AI system—Aristotle, developed by Harmonic AI Inc.—“truly do something brilliant? Or did it merely repeat something that had already been created by brilliant humans?”
- It may not matter whether AI is generating new ideas or not—or whether it may one day do better work than human researchers. Regardless, it’s already becoming a powerful tool when placed in the hands of smart and experienced scientists, Metz notes. “These systems can analyze and store far more information than the human brain, and can deliver information that experts have never seen or have long forgotten,” he adds before quoting more subject matter experts. One of these is Derya Unutmaz, MD, a professor at the Jackson Laboratory and the University of Connecticut. “I am still relevant, maybe even more relevant” in the age of superadvanced AI, Unutmaz says. “You have to have very deep expertise to appreciate what it is doing.”