Health and human services professionals had the chance to explore the latest trends and innovations in the field at the recent 2024 ISM + APHSA Education Conference & Expo hosted by APHSA in Aurora, Colorado.
As an annual sponsor of the event, we look forward to hearing from industry insiders about the latest issues and concerns for IT in health and human services. From talking to visitors at our booth on the main floor, to hearing a number of presentations by notable experts, we always glean insights that help shape our business.
This year I was particularly interested in gauging the progress of AI in the HHS sector. Here’s a quick recap of what I saw and heard at ISM 2024.
Key takeaway
As might be expected, the growing role of artificial intelligence (AI) in human services was a prominent theme on the agenda. With headlines swirling about the benefits and pitfalls of AI, speakers underlined that while AI holds immense promise for improving efficiency, accuracy, and outcomes, it is crucial to approach its implementation with caution and a deep understanding of its limitations.
Andy Pittman, Director of HHS Strategy at Microsoft, noted in his presentation, “Real World Program Improvements Realized via AI,” that the introduction of AI into human services is not unlike the advent of elevator technology.
Initially, Pitman explained, the public was highly suspicious of passenger elevators. In 1852, Elisha Otis had invented a safety device that prevented an elevator car from falling if the main cable broke, and to allay public concerns he demonstrated his invention at the 1854 Crystal Palace Exhibition in New York. Even so, the first passenger elevator didn’t enter service until 1857, and only under the control of a trained operator. Although we now take the automated elevator for granted, elevator operators persisted well into the 1950s.
Pittman argued that, just as elevators transformed the way people moved between floors, AI has the potential to revolutionize how government professionals work. Nonetheless, it is essential to approach this transformation with a clear understanding of the technology’s capabilities and limitations.
Use cases for AI in HHS
Several potential use cases for AI in human services were explored at the conference:
- Using large language models (LLMs) to generate summaries of long-term cases for new or returning case workers (a minimal sketch of this idea follows the list).
- Prioritizing claims or abuse reports by flagging criteria associated with acute need or high-risk situations.
- Extracting data from user-uploaded paperwork with optical character recognition (OCR).
- Helping non-technical users make sense of large sets of program data through conversational assistants.
- Supporting case workers in making informed decisions with AI-powered policy knowledge assistants.
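To make the first use case concrete, here is a minimal sketch, assuming the OpenAI Python SDK and a generic chat model, of how years of case notes might be condensed into a briefing for a returning case worker. The model name, prompt wording, and note format are illustrative assumptions, not a description of any system demonstrated at the conference.

```python
# Illustrative sketch only: summarizing a long-term case for a returning case worker.
# Assumes the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY in the environment;
# the case-note structure and prompt wording are hypothetical.
from openai import OpenAI

client = OpenAI()

def summarize_case(case_notes: list[str]) -> str:
    """Condense chronological case notes into a brief history, status, and action items."""
    prompt = (
        "You are assisting a human services case worker. Summarize the case notes "
        "below into a brief history, current status, and open action items. "
        "Do not invent facts that are not in the notes.\n\n"
        + "\n".join(case_notes)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model; named here only as an example
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the output as factual and repeatable as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    notes = [
        "2021-03-02: Initial intake; household of three, housing assistance requested.",
        "2022-07-15: Benefits renewed; new employment reported.",
        "2024-01-10: Case reopened after change in household composition.",
    ]
    print(summarize_case(notes))
```

In a real deployment, any tool like this would also need safeguards for personally identifiable information and human review of the output, in line with the cautions raised below.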
It’s important to note that many of the AI applications demonstrated at the conference were still in their early stages of development. Technical challenges and limitations were evident in some cases, and the ability of AI to perform tasks beyond pre-scripted scenarios remains a subject of ongoing exploration. In one presentation, for example, the conference Wi-Fi was overwhelmed by the number of participants, making the planned live demo impossible.
Challenges around AI
Of course, the enthusiasm surrounding AI’s potential must be tempered by a realistic assessment of its challenges and risks. One of the primary concerns raised at the conference was the potential for AI to generate hallucinations or inaccurate outputs.
Paula Morgan, CIO of the New Mexico Health Care Authority, emphasized during KPMG’s presentation “A Human Approach to AI” that “the over-reliance on AI without understanding the possibility of hallucinations can lead to serious consequences.” To mitigate this risk, Morgan urged leaders to be transparent about the capabilities and limitations of AI tools, explaining how they work and how the data they produce should be interpreted in context.
Another significant concern expressed at the conference was the potential for bias to be inadvertently introduced into AI systems, especially when those systems learn from biased human decisions and data. Greg Ayers, Deputy Commissioner at the Virginia Department of Social Services, pointed out, “There is already a lot of bias in the legislation that workers enforce every day, driven by political choices.” My takeaway is that while AI can be a valuable tool for informing decisions, it should never be used to replace human judgment entirely.
In summary
These are just some of the insights into the potential of AI for health and human services that we gathered at the 2024 ISM + APHSA Education Conference & Expo. The technology offers real possibilities for improving efficiency, accuracy, and citizen outcomes, provided we approach its implementation with a balanced perspective. It’s important to understand the limitations, mitigate the risks of bias, and ensure that AI is used as a tool to augment human expertise rather than replace it. With all this in mind, I expect to see HHS professionals using AI to improve the quality of care and enhance citizen outcomes.
Learn more
- ISM 2024 agenda
- ISM presentation videos
- Contact us to talk about AI for HHS