Microsoft's Healthcare "Aim" Is Off
Its Deal With Harvard Health Publishing Is Small Bore
Trouble starts when means conquer ends. The hard thing is positioning novel objectives, not finding the technical potential to reach them.
Sebastian Herrera, technology reporter for The Wall Street Journal, was the first to break the news last night (Microsoft Tries to Catch Up in AI With Healthcare Push, Harvard Deal):
“In an effort to steal a march on its more-advanced rivals, the company has seized on healthcare as a lane in which it believes it can deliver a better offering than any of the other major players and build the brand of its Copilot assistant.
A major update of Copilot scheduled for release as soon as this month will be the first to reflect a new collaboration between Microsoft and Harvard Medical School. The new version of Copilot will draw on information from the Harvard Health Publishing arm to respond to queries about healthcare topics.
Microsoft will pay Harvard a licensing fee.
In an interview, Dominic King, MD PhD, vice president of health at Microsoft AI, declined to discuss the arrangement with Harvard but said the company’s aim is for Copilot to serve answers that are more in line with the information users might get from a medical practitioner than what is currently available.
While the Harvard Health Publishing literature includes mental-health material, Microsoft declined to say how the updated Copilot would handle such questions…”
Meanwhile, Illinois Governor JB Pritzker in August signed the Wellness and Oversight for Psychological Resources Act. The legislation prohibits the use of AI for conducting behavioral health therapy, creating treatment plans and diagnosing consumer conditions. The bill essentially confines Microsoft Copilot-like and ChatGPT-like objects to administrative purposes, say, efficient note taking.
The law was passed unanimously by the Illinois legislature.
[The only other piece of legislation to tackle this issue was passed in Nevada (see AB406 Overview). Similar legislation is pending in Utah that would bar mental health chatbot suppliers from selling or sharing health information (see H.B. 452 Artificial Intelligence Amendments).]
One of the architects of the Illinois legislation was Kyle Hillman, CAE, CMM, of the National Association of Social Workers. From Hillman’s perspective, the rationale behind the bill is simple: “An algorithm is not a therapist who is licensed, educated, and credentialed. Be careful that the quest for efficiency doesn’t take over what the job is.”
In other words, healthcare isn’t technology.
When allowed to act on their own in a world that has reached Peak Complexity, whether embodied as chatbots, robots or simply outputting algorithmically derived judgements, mindless machines carry enormous risks along with their enormous powers. Unable to question their own actions or appreciate the consequences of their own programming — unable to understand the context in which they operate — they can wreak havoc, either as a result of flaws in their programming or through the deliberate aims of their programmers.
Yes, we know how to make machines think. What we don’t know is how to make them thoughtful. In an interview today with Open Minds (Eyes on AI), Hillman said:
In Illinois, mental health therapy is a licensed profession. Therapists must complete rigorous education, meet clinical training requirements, and adhere to strict ethical standards. AI is none of these things: it holds no license, no accountability, and no human understanding of the people it serves. He referred to the current state of AI as something akin to “the wild west.”
There is a place for AI, but conversations are necessary. Those conversations include ethical concerns, data privacy, data ownership rules, and necessary safeguards. He said that policymakers need to explore the ethical considerations in allowing AI to analyze clinician-patient sessions, particularly where the line falls between helpful insights and harmful surveillance.
Mr. Hillman noted two other issues that concern the use of AI in the behavioral health field. One is liability and risk management. In systems where therapy sessions are recorded, those recordings are ‘discoverable’ in the event of a lawsuit. Currently, only notes are discoverable. He also pointed out an unanswered concern—if consumers and therapists know they are being recorded, does it change what they say and how they interact?
The other issue he discussed is the disposition of consumer data. In his view, the value of start-up tech companies is in the consumer data that they control. While they promise anonymity, “I could see a 23andMe situation where defunct companies’ data is transferred to another entity. We need policies codified in law that data can’t be sold, collected, or resold. Data is the most precious thing you have as clinicians, and we must protect it.”
You have to give technology what it wants, which is more technology.
But if our minds can’t tell better stories, we can’t consciously create better strategies; we can only create by accident. So right now most big technologies are stray thoughts without a narrative. We are confusing means for ends, mistaking operations for strategy. We have smartness without direction. Julie Yoo, a General Partner at Andreessen Horowitz, is very smart, I think, but her vision that “AI is becoming a New Site of Care” underscores the problem with the tech-led brand of imagination:
”The first iteration of digital health was basically a skeuomorphic recreation of a traditional doctor’s visit on a computer screen. Now, we’re seeing AI-native companies take full advantage of the native capabilities of AI to create something different and indispensable: the supply-abundant, engagement-centric, always-on healthcare system of the future.”
I don’t see how this breaks the mold of the status quo future. I don't see the invention of a new language. I don’t see the interzone.
Confession: I had to Google the definition of skeuomorphic.
Breaking ‘Main Character Syndrome’
New market narratives place technology actors in the background, not as main characters but in the crowd scene, extras in a screenplay that defines progress in economic terms, not technological ones. It’s a story that finds value alignment with humanity in a way that feeds the insatiable demand for data to sustain the new economic system(s) of the future, in which ‘the production of data’ is the foundation of competition.
Which is the backstory for this Microsoft deal with Harvard.
Which was the logic for OpenAI offering $500 million for Medal.
Which is core to the many copyright lawsuits against OpenAI.
Which is the white space for the pharmaceutical industry to reshape and redefine the next cycle of market innovation. (For better perspective, see Can Big Pharma Save Big Tech From Strategic Collapse? published this week by Blue Spoon.)
Everyone should be “aiming” for something bigger. The goal isn’t to understand and then market what things are made of (the technical potential, the ‘mechanism of action’, the Target Product Profile), but how the pieces cohere and compete as an integrated economic whole. This is next-level thinking, where the vision thing is about breaking the loop of old verbs that keeps so much investment from sparking fresh growth.
The way to break that ‘center of narrative gravity’ — to spark and then sustain ‘A Portfolio of Amazon-Like Objects’ — is with a different ignition point, new words and new concepts to power better story performance. (Blue Spoon has posted a series of leadership white board sessions to begin this process on its website, available here for a limited time).
The Nobel Prize in Chemistry in 1977 was awarded to Ilya Prigogine for his idea that change in a system, instead of being orderly and stable, is continually “fluctuating” and may move far from equilibrium. Driven by self-amplifying feedback loops, at times these fluctuations may become so powerful that they “shatter” the whole system. It is at this moment that things either disintegrate into chaos or leap to a higher order of self-organization.
From Prigogine’s speech at the Nobel Banquet, December 10, 1977:
“There is indeed a problem. Science has created a form of cultural stress expressed by the famous “two cultures” motto of Lord Snow. Science for the benefit of humanity is only possible if the scientific attitude is deeply rooted in the culture as a whole.
This implies certainly a better dissemination of scientific information in the public on one side but also on the other a better understanding of the problems of our time.”
Indeed.
/ jgs
John G. Singer is Executive Director of Blue Spoon, the global leader in positioning strategy at a system level. Blue Spoon specializes in constructing new industry narratives.