2025-04-09 AI Agent for Laboratory Reporting
Demo Video
OpenELIS AI-Assisted Lab Reporting: An Overview
- 1. Background on OpenELIS and Its Global Footprint
- 2. AI in Healthcare: LLMs, NLP, and RAG
- 3. AI Architecture: Two-Stage RAG Pipeline for Lab Reports
- 4. Data Backend: FHIR Data Pipes and the Spark SQL Warehouse
- 5. Technical Challenges and Solutions
- 6. Future Directions and Ideas to Explore
- 7. Some Concluding Words
Imagine if getting insights from laboratory data was as easy as asking a question. The OpenELIS AI-Assisted Lab Reporting project is working toward that reality. It is an initiative to build an AI-powered “lab reporting agent” on top of the OpenELIS Global laboratory information system. In simple terms, this agent allows health workers to query lab data in natural language and receive narrative answers, rather than manually pulling reports or interpreting raw data tables. Eventually, we want the generated output to include analysis-ready datasets and visualizations in addition to the narrative.
Flexible lab reporting: Laboratory reporting needs vary by context. This project aims to let users ask ad hoc questions about lab data and get useful answers on the fly, making data exports more flexible than one-size-fits-all reports.
Operational insights: Beyond just listing results, the AI agent can highlight patterns and insights in the lab data, helping staff understand what is happening in their lab workflows (e.g. identifying testing trends or turnaround times).
Reusability for global health: By building on open standards, the solution is designed to be replicable for different labs and health systems. In particular, it leverages the HL7 FHIR standard for data exchange, making it a model that other countries and systems can reuse rather than a one-off tool.
At a high level, the AI Lab Reporting agent works by combining OpenELIS’s lab dataset with modern large language model (LLM) technology. When a user poses a question (for example, "How many COVID-19 tests were processed last month, and what was the positivity rate?"), the system automatically retrieves the relevant data from OpenELIS and generates a human-readable narrative answer. This happens through a multi-step pipeline that uses multiple LLMs to fetch the data and to compose the answer. We’ll dive into how this works shortly. The end result is an intelligent assistant that can save time, adapt to different reporting needs, and make lab data more accessible to decision-makers.
1. Background on OpenELIS and Its Global Footprint
OpenELIS Global (Open Enterprise Laboratory Information System) is a well-established open-source laboratory information system (LIS) tailored for public health and clinical laboratories. It is used at national scale and in a wide variety of settings, from small hospital labs up to national reference labs and everything in between. Laboratory professionals use OpenELIS daily to manage their workload, including logging test orders, tracking specimens, importing results from lab analyzers, and handling complex workflows like pathology and microbiology. By automating work plans and reducing manual steps, OpenELIS helps labs improve turnaround times and result accuracy for better patient care (1).
OpenELIS has been adopted as a global good in digital health, meaning it’s deployed in multiple countries and is freely available for others to implement. For example, OpenELIS is the national LIS for Haiti and Côte d’Ivoire, among other implementations (2). It meets international lab standards (like ISO and SLIPTA) and is designed with data security in mind (1). Importantly, OpenELIS supports standards-based interoperability with other health information systems through the addition of a FHIR API layer that exposes lab data in the HL7 FHIR (Fast Healthcare Interoperability Resources) format (1,2).
All of this makes OpenELIS an excellent foundation for an AI-assisted reporting tool. The system already captures rich, structured lab data and can provide it in a standardized way. The AI Lab Reporting project builds on this by taking the data that OpenELIS stores and turning it into easy-to-understand narratives on demand. Essentially, it’s about unlocking the data – using AI to bridge between the technical data in the LIS and the practical questions a user might want answered.
2. AI in Healthcare: LLMs, NLP, and RAG
Recent advances in Artificial Intelligence, especially natural language processing (NLP) and large language models (LLMs), have created new opportunities in healthcare. NLP enables computers to interpret and generate human language, which is crucial when key clinical data exists in the form of free-text notes, reports, and complex terminologies. LLMs are advanced AI models trained on massive text datasets that can understand context and generate human-like language, allowing them to answer questions, summarize information, and engage in dialogue. In healthcare, this capability presents an opportunity to cut through complexity and make information more quickly and easily accessible to users with various levels of technical knowledge or contextual expertise (3). For example, an LLM could summarize a lengthy lab report or explain medical jargon in simpler terms for a clinician or health program manager.
However, applying LLMs to healthcare data also brings challenges. These models, while powerful, have notable limitations. They do not have inherent access to up-to-the-minute data and are generally limited by what was in their training set. They can also hallucinate: generate plausible-sounding but incorrect or fabricated information (3). And critically, if asked directly, a general LLM like ChatGPT doesn’t know about a specific hospital’s data or a particular lab’s results, since those are not part of its training. On top of that, patient health data is sensitive, and we must be careful about privacy. We wouldn’t want to feed confidential lab records into a public AI model and risk exposing that information.
This is where the concept of Retrieval-Augmented Generation (RAG) comes in. RAG is an approach that combines the strengths of LLMs with specific data sources to improve accuracy and relevance. In a RAG setup, the system first retrieves relevant data from an external source (such as a database or document repository) and then feeds that data into the LLM to inform its answer. The LLM’s response is thus “augmented” by real, up-to-date information. The RAG approach leverages a store of curated data to assist the LLM in answering the question, so the model isn’t limited to its built-in knowledge. The LLM receives both the user’s question and the retrieved data when formulating a response, allowing it to ground its answer in facts (3).
3. AI Architecture: Two-Stage RAG Pipeline for Lab Reports
Under the hood, the lab reporting assistant is a Retrieval-Augmented Generation pipeline built around two LLM stages. At a high level, the process for answering a user’s question involves three main steps:
Understand the question and fetch data – A cloud-based language model turns the user’s natural language question into a precise database query (SQL) designed to retrieve the relevant information from the lab data.
Validate and execute the query – The system checks the AI-generated query to ensure it’s correct and safe, then executes it on the OpenELIS data (stored in a special database) to get the result set (the raw data answer).
Generate a narrative answer – A local language model takes the result data and produces a human-friendly narrative response that directly answers the user’s question, phrased in clear language.
Let’s break down each of these steps in a bit more detail, to see how the AI and data components work together:
Stage 1: SQL Query Generation using a Cloud-Based LLM
Everything starts with the user’s question, posed in everyday language. For instance, a user might ask: “How many tests were conducted last week and how many were positive?” This question needs to be translated into a form that a database can understand – specifically, a SQL query against the lab data. In Stage 1, the system uses a cloud-hosted LLM to perform this translation. We provide the LLM with a custom prompt, the user’s question, and a description of the database schema (the tables and columns that hold relevant lab data). The cloud LLM uses its knowledge of SQL syntax and the provided schema context to generate an SQL statement that it predicts will retrieve the information needed to answer the user’s question. For example, the LLM might output something like:
SELECT COUNT(*) AS total_tests,
SUM(CASE WHEN result = 'Positive' THEN 1 ELSE 0 END) AS positive_tests
FROM LabResults WHERE test_date BETWEEN '2025-03-01' AND '2025-03-07';
The key point is that the AI is writing this query so that the human user doesn’t have to. It’s essentially doing the job of a data analyst or IT specialist, figuring out which tables to pull from and what conditions to apply based on the phrasing of the question. The project chose to use a powerful cloud-based model for this step because crafting accurate SQL from arbitrary natural language can be complex. Cloud LLMs (like those provided by OpenAI or Google) are typically large and very capable with such tasks, having been trained on vast amounts of text that include examples of code and queries. Using a cloud service also offloads the heavy computation of understanding and translating the question. Note that only the schema and the question text are sent to the cloud LLM, with no actual patient or lab result data included in this prompt. This is an important design choice to protect privacy, and it is possible because the cloud AI’s sole job is to interpret the question and produce the right “recipe” (SQL) for getting the data needed.
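To make this more concrete, here is a minimal sketch of what Stage 1 could look like in code. It is illustrative only: the schema description, prompt wording, and the call_cloud_llm helper are assumptions standing in for whichever cloud provider API a deployment actually uses. As described above, only the schema text and the question appear in the prompt.

# Illustrative Stage 1 sketch (not the project's actual code). `call_cloud_llm`
# stands in for whichever cloud provider API is used; only the schema description
# and the user's question are sent to it, never patient or result data.

SCHEMA_DESCRIPTION = """
LabResults(result_id, test_code, test_name, result, test_date, lab_id)
-- schema description only: table and column names, no row data
"""

PROMPT_TEMPLATE = """You are an assistant that writes Spark SQL.
Given this schema:
{schema}

Write a single SELECT query (no INSERT, UPDATE or DELETE) that answers:
"{question}"
Return only the SQL, with no explanation."""


def generate_sql(question: str, call_cloud_llm) -> str:
    """Ask the cloud LLM to translate a natural-language question into SQL."""
    prompt = PROMPT_TEMPLATE.format(schema=SCHEMA_DESCRIPTION, question=question)
    sql = call_cloud_llm(prompt)   # a thin wrapper around the provider's chat API
    return sql.strip().strip("`")  # strip stray backticks the model may add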
Stage 2: SQL Query Validation and Execution
Once the LLM returns an SQL query, the system doesn’t just run it blindly; the result has to be validated, fixed if necessary, and finally executed on the data. Think of this as a safety check plus the actual data retrieval step. The generated SQL is first reviewed by the system to ensure it is syntactically correct and won’t do anything harmful. This may involve checking that the query only selects data (no dangerous operations like deleting data), and verifying that table and field names exist in the schema. If the query looks off (for example, it references a non-existent field or is logically incoherent), the system can correct it or, if needed, even ask the LLM again with some feedback. In many cases, minor edits or guardrails can be applied – for instance, removing any unauthorized fields or adding limits to prevent an excessively large result.
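A very simple version of such guardrails might look like the sketch below. This is a hedged illustration rather than the project's actual validator, and the table names are assumed; it only shows the kinds of checks described above: read-only statements, known tables, and a cap on result size.

# Simplified guardrail sketch for AI-generated SQL (illustrative, assumed table names).
FORBIDDEN_KEYWORDS = ("insert", "update", "delete", "drop", "alter", "truncate", "create")
KNOWN_TABLES = {"labresults", "lab_results_flat"}


def validate_sql(sql: str) -> str:
    """Reject non-SELECT statements and unknown tables; cap unbounded result sets."""
    lowered = sql.lower()
    if not lowered.lstrip().startswith("select"):
        raise ValueError("Only SELECT statements are allowed")
    if any(keyword in lowered.split() for keyword in FORBIDDEN_KEYWORDS):
        raise ValueError("Query contains a forbidden operation")
    if not any(table in lowered for table in KNOWN_TABLES):
        raise ValueError("Query does not reference a known table")
    if "limit" not in lowered and "count(" not in lowered:
        sql = sql.rstrip("; \n") + " LIMIT 1000"  # avoid excessively large results
    return sql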
After validation, the query is executed on the Apache Spark database that holds the OpenELIS lab data. This database is essentially a data warehouse that stores data from one or multiple OpenELIS instances. In addition, it stores the data in a standard format, so other data sources can easily be integrated.
The query will retrieve a subset of lab records or aggregated statistics that answer the question. For example, it might return a small table with the total number of tests and positives found in the specified date range. This result set is then passed along to the next stage. At this point, we have the answer in a raw data form (numbers, codes, etc.), but it’s not yet something you would hand directly to a non-technical user. That’s where the final stage comes in.
Stage 3: Result Set Analysis and Narrative Generation (Local LLM)
In Stage 3, the system takes the data retrieved by the query (the result set) and feeds it to an LLM running on a private, local server. This LLM uses the provided context to generate a narrative answer – and, in the future, possibly additional types of output. This local model is tasked with turning the raw results into a coherent, conversational response that directly addresses the original question. Following our example, the local LLM might receive a table that includes information like total_tests = 500 and positive_tests = 50 (along with the context that this pertains to last week’s COVID-19 tests), and it might generate a sentence or two such as: “Last week, the lab processed 500 tests in total, out of which 50 were positive, yielding a 10% positivity rate.”
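As a rough sketch, the hand-off to the local model can be as simple as wrapping the question and the result rows in a short prompt and posting it to a locally hosted inference server. The endpoint URL, model name, and response field below are assumptions (an Ollama-style local server), not the project's actual configuration.

import json
import urllib.request

# Illustrative Stage 3 sketch: summarize a small result set with a locally hosted model.
# The URL, model name, and response format are assumptions (an Ollama-style server).
LOCAL_LLM_URL = "http://localhost:11434/api/generate"


def narrate(question: str, rows: list) -> str:
    """Ask the local LLM to answer the question based only on the retrieved rows."""
    prompt = (
        "You are a laboratory reporting assistant.\n"
        f"Question: {question}\n"
        f"Query results: {json.dumps(rows)}\n"
        "Answer in one or two clear sentences based only on these results."
    )
    payload = json.dumps({"model": "llama3", "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        LOCAL_LLM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]


# Example: narrate("How many tests were conducted last week and how many were positive?",
#                  [{"total_tests": 500, "positive_tests": 50}])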
Because this stage involves sensitive clinical data, the design uses a local LLM that runs within the trusted environment of the health system. This way, none of the patient-specific or site-specific data ever leaves the local infrastructure. The local LLM can be a somewhat smaller model that is still capable of good writing but can be hosted on a secure server. Although local models may be a bit slower or less fluent than the absolute cutting-edge cloud models, they have improved dramatically and can be further optimized for the task at hand. Crucially, using a local model ensures privacy: identifiable or sensitive information is never sent to an external service and thus remains under the health program’s control.
The output from Stage 3 is the final answer to the user’s question in narrative form. The user, for example, sees a friendly explanation of the lab statistics they asked about, rather than just a database printout. From the user’s perspective, they asked a question in plain English and got an answer in plain English – the complex querying and data parsing all happened behind the scenes.
In summary, the architecture uses two AI models in tandem: one (cloud-based) to act as a clever data miner, and another (local) to act as an eloquent reporter. This division of labor is intentional to get the best of both worlds: the cloud LLM handles the heavy-duty reasoning with no sensitive data involved, while the local LLM handles the sensitive information safely. Next, we’ll discuss how the OpenELIS data is prepared and managed so that the Stage 1 query and Stage 2 execution can work smoothly.
4. Data Backend: FHIR Data Pipes and the Spark SQL Warehouse
To support this AI-driven reporting, the project needed a robust data backend that could be easily queried. OpenELIS itself stores data in a transactional PostgreSQL database that works great for day-to-day lab operations. However, running analytical queries on such a database, especially when it is supporting a live system, is inefficient and potentially disruptive.
Fortunately, the data in OpenELIS is exposed via a widely-used health data standard called FHIR (10). Unfortunately, this format is also not designed for efficient analytics using LLMs; FHIR is deeply structured, with layers of nested resources and references, which isn’t straightforward for an LLM to navigate. To solve these issues, we decided to connect our RAG pipeline to a FHIR-based analytics platform, using a global good from Google’s Open Health Stack called FHIR Data Pipes (9). Using this approach, we created a separate, analytics-friendly database that syncs with the OpenELIS data using micro-batching.
For some context, FHIR Data Pipes is an open-source extract-transform-load (ETL) toolkit developed to take data from a FHIR server and transform it into a format suitable for analytics. In our case, we are using the tooling to load the data into a SQL-accessible data warehouse (4). The toolkit pulls data through the OpenELIS FHIR API (which provides standardized FHIR resources representing lab orders, results, patients, etc.) and converts it into a set of flat tables stored as Apache Parquet files. Parquet is a columnar data storage format that is very efficient for large-scale analytics. The use of Parquet means the resulting dataset can be queried using Apache Spark’s SQL engine with high performance (5).
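To illustrate how this warehouse is queried, here is a minimal Spark SQL sketch. The warehouse path, view name, and columns are assumptions chosen for illustration, not the project's actual layout.

from pyspark.sql import SparkSession

# Illustrative sketch: query the Parquet output of the ETL pipeline with Spark SQL.
# The path, view name, and columns are assumptions, not the project's actual layout.
spark = SparkSession.builder.appName("lab-reporting-agent").getOrCreate()

lab_results = spark.read.parquet("/dwh/latest/lab_results_flat")  # flattened Parquet table
lab_results.createOrReplaceTempView("lab_results_flat")

summary = spark.sql("""
    SELECT COUNT(*) AS total_tests,
           SUM(CASE WHEN result = 'Positive' THEN 1 ELSE 0 END) AS positive_tests
    FROM lab_results_flat
    WHERE test_date BETWEEN '2025-03-01' AND '2025-03-07'
""")
summary.show()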
One of the advantages of this approach is that we can pre-process and flatten complex data before the AI ever sees it. Health data in FHIR is normalized into many interlinked resource types (Patient, Observation, Encounter, etc.) for interoperability. But an analytical query – especially one that an LLM generates – is simpler to formulate if much of that data is already joined together in one or a few larger tables. FHIR Data Pipes supports creating such “flattened views” using FHIR “ViewDefinition” resources (4,5). For example, we can create a custom view that combines patient demographics with test results and test orders into a single wide table. By doing this flattening, the eventual SQL queries need fewer complex operations, which not only improves performance but also reduces the complexity that the LLM has to handle. Our team identified early on that minimizing complicated joins is important for both accuracy and speed of the AI’s queries, since excessive joins can slow down queries significantly or cause errors (5).
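For a sense of what such a flattened view looks like, below is a hedged sketch of a ViewDefinition-style view, written here as a Python dict. The column names and FHIRPath expressions loosely follow the SQL-on-FHIR draft and are illustrative assumptions, not the project's actual view definitions.

# Sketch of a ViewDefinition-style flattened view (SQL-on-FHIR draft shape);
# the column names and FHIRPath expressions are illustrative assumptions.
lab_results_view = {
    "resourceType": "ViewDefinition",
    "name": "lab_results_flat",
    "resource": "Observation",
    "select": [
        {
            "column": [
                {"name": "patient_ref", "path": "subject.reference"},
                {"name": "test_code", "path": "code.coding.first().code"},
                {"name": "test_name", "path": "code.coding.first().display"},
                {"name": "result", "path": "value.ofType(CodeableConcept).coding.first().display"},
                {"name": "test_date", "path": "effective.ofType(dateTime)"},
            ]
        }
    ],
}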
The resulting Spark database (sometimes dubbed the “Spark DB repository”) acts as the knowledge base for the AI agent. It essentially holds a frequently-updated copy of the OpenELIS data, but optimized for querying. The warehouse can be updated on a schedule (for instance, via nightly runs of FHIR Data Pipes, or even near-real-time incremental updates) so that it stays in sync with the latest lab results. Because it’s separate from the live OpenELIS system, queries running here won’t impact the performance of the lab system used by technicians. And because it’s built on open standards (FHIR, Parquet, SQL) and open-source tools, it aligns with the project’s goal of being a reusable and transparent global good.
From the AI’s perspective, this Spark-based warehouse is the “external data source” it taps into during the RAG process (5). When the cloud LLM creates an SQL query, it targets this warehouse. When the query runs, Spark SQL quickly scans the Parquet data and returns the result. The local LLM then uses this result set – the warehoused data filtered down to what the question requires – to generate the narrative answer.
The use of Apache Spark also means the system can scale to large volumes of data. If a country has millions of lab results, the Spark engine is designed to handle big data across distributed computing resources. For the pilot phase, the system might run on a single server, but it has the flexibility to scale out horizontally if needed.
In essence, FHIR Data Pipes and the Spark database serve as the bridge between OpenELIS and the AI. They ensure that the data is in the right shape and right place for the LLMs to do their job effectively. This setup also future-proofs the solution to some extent: if OpenELIS or another system produces FHIR data, the same pipeline can be used to create an analytics DB, and the AI agent approach would still work. It’s a generic pattern – fetch standard health data, convert it to flat tables, then let the AI query it with SQL – that could be applied beyond just this one project.
5. Technical Challenges and Solutions
Data Preparation and Query Performance
Challenge: Health data in FHIR (and in OpenELIS) is highly relational, meaning answering a question might involve linking many tables (e.g. patient, encounter, test, result, etc.). If the AI-generated SQL had to join too many tables, it could become slow or convoluted.
Solution: Prepare flattened, denormalized tables in advance for common query patterns. By using FHIR Data Pipes’ view definitions, the project transformed complex relational data into simpler flat structures, minimizing the number of joins needed (5). This dramatically streamlines data retrieval and improves performance. Essentially, the heavy lifting of combining data is done upfront in the ETL stage rather than at query time.
Terminology and Concept Mapping
Challenge: Users might ask questions using colloquial or clinical terms that don’t exactly match the database fields. For instance, someone might query “COVID tests” but the data might store them under specific assay names or LOINC codes.
Solution: Implement a layer of terminology mapping or alias recognition. The team began addressing this by curating common synonyms and making sure the prompt to the LLM includes them, or by instructing the LLM to interpret certain phrases. In the future, this could be improved by integrating medical dictionaries or ontology services so that the AI can recognize, for example, that “CBC” refers to a complete blood count test, or that “blood sugar” relates to a glucose lab result. Recognizing hierarchical relationships (like a category of tests) is also important so that a question about "cholesterol tests" can map to all relevant test codes. This is an ongoing effort; accurate interpretation of user intent in a clinical context requires handling the variability in how things can be described (5).
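A first pass at this can be as simple as an alias table that expands colloquial terms into the codes actually stored in the warehouse, as in the sketch below. The aliases and LOINC codes shown are illustrative examples, not the project's actual mappings.

# Simplified alias-mapping sketch; the aliases and codes are illustrative examples.
TEST_ALIASES = {
    "covid test": ["94500-6", "94558-4"],  # example LOINC codes for SARS-CoV-2 assays
    "cbc": ["58410-2"],                    # complete blood count panel
    "blood sugar": ["2345-7"],             # glucose
}


def expand_terms(question: str) -> dict:
    """Return the test codes for any known aliases mentioned in the user's question."""
    lowered = question.lower()
    return {alias: codes for alias, codes in TEST_ALIASES.items() if alias in lowered}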
Prompt engineering for the LLM
Challenge: Getting the cloud LLM to consistently produce a correct SQL query required careful prompt design. If the prompt is too vague, the model might produce a wrong or inefficient query; too strict, and it might not handle novel questions.
Solution: The team iterated on the prompt given to the LLM, providing clear instructions and context. This included showing the schema, specifying the desired format (e.g. “return a SQL query that…”), and even adding example question-query pairs as guidance. By customizing the prompt with specific instructions and context, they influenced the model to focus on the right tables and filters (5).
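A prompt along the following lines illustrates the idea; the wording, schema, and example pair are invented for illustration and are not the project's actual prompt.

# Illustrative prompt skeleton with one few-shot example (not the project's actual prompt).
FEW_SHOT_PROMPT = """You translate questions about laboratory data into Spark SQL.

Schema:
  lab_results_flat(patient_ref, test_code, test_name, result, test_date)

Rules: produce a single SELECT statement, never modify data, and always filter by date
when the question mentions a time period.

Example
Question: How many malaria tests were run in January 2025?
SQL: SELECT COUNT(*) FROM lab_results_flat
     WHERE test_name LIKE '%Malaria%'
       AND test_date BETWEEN '2025-01-01' AND '2025-01-31';

Question: {question}
SQL:"""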
Automated SQL validation and correction
Challenge: Even with good prompting, the AI-generated SQL isn’t always perfect. It might have minor syntax errors, or it might run but return an unexpected result (maybe the question was ambiguous).
Solution: Develop a validation step where the query is parsed and reviewed. The system checks for common issues (like missing quotes, or misuse of a column). If an error is caught (for example, the database returns an error), the system can autonomously do a quick fix or even call the LLM again, providing the error message and asking for a corrected query. Additionally, optimization checks were considered – ensuring the query has appropriate filters (so it doesn’t try to scan the entire dataset unnecessarily) and adding limits if needed. By putting in this rigorous validation and cleanup process, the project aimed to prevent inefficient or broken queries from running (5).
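Put together with the hypothetical generate_sql and validate_sql helpers sketched earlier, such a correction loop might look roughly like this:

# Illustrative correction loop (sketch): feed errors back to the cloud LLM and retry,
# reusing the hypothetical generate_sql and validate_sql helpers sketched above.
def answer_with_retry(question, call_cloud_llm, run_query, max_attempts=3):
    """Generate, validate, and execute SQL, retrying with error feedback on failure."""
    feedback = ""
    for attempt in range(max_attempts):
        sql = generate_sql(question + feedback, call_cloud_llm)
        try:
            return run_query(validate_sql(sql))
        except Exception as error:  # validation or execution failure
            feedback = (
                f"\nThe previous query failed with: {error}. "
                "Please return a corrected SELECT query."
            )
    raise RuntimeError(f"Could not produce a valid query after {max_attempts} attempts")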
Privacy and data security
Challenge: As noted earlier, protecting sensitive health data is paramount. A major challenge was how to use a cloud AI service without exposing patient-level information to it.
Solution: The architecture itself was the solution here – splitting the workload between a cloud LLM (which only sees schemas and aggregate questions) and a local LLM (which sees actual data). This way, no raw identifiers or individual results leave the local environment. Moreover, the retrieval step (SQL query) inherently enforces security because it only pulls data the user is allowed to query. The system can leverage the existing access controls of OpenELIS by ensuring the queries respect those rules (for instance, a user from Lab A cannot query data for Lab B if the underlying data access is partitioned). Using RAG in this controlled manner means the LLM’s output is always based on data the user could have accessed anyway, just presented more conveniently (3).
Local model performance considerations
Challenge: Running an LLM locally (Stage 3) can introduce latency. Large language models typically require significant computation, and if using a smaller server on-premise, responses might take several seconds or more, which could impact the user experience.
Solution: Optimize and right-size the local model. The team experimented with reasonably sized open-source models that can run on a GPU or even CPU if needed, and balanced speed with output quality. Techniques like quantization (which reduces model size and speeds up inference) can be used to improve response times. In practice, the narrative generation for a single query is often just a few sentences, which is not heavily demanding, so even a moderately sized model can handle it in acceptable time. Additionally, by focusing the local LLM’s input (providing it only the concise results, not huge datasets), the processing is faster. While there is still a trade-off – extremely fast responses might not be possible with fully local processing – the team considers the current performance workable for a reporting context. Future improvements might include using more efficient model architectures or hardware acceleration to further speed this up.
Cloud LLM usage and cost
Challenge: Relying on a third-party cloud AI (Stage 1) raises the question of cost and scalability. Many cloud LLM providers charge per usage (per token or request), and heavy use in a production environment could become expensive.
Solution: During prototyping, the project took advantage of free or research-tier offerings (for example, some platforms offer free quotas or grants for development) (5). For the long term, the team is exploring options to mitigate cost – such as optimizing prompts to be shorter (reducing tokens), caching frequent query results, or even fine-tuning a smaller open-source model to eventually replace the need for a large cloud model. The field of AI is evolving quickly, and open-source LLMs are getting better by the day (6). If an open model can be trained to handle the SQL generation task well, it could be run locally too, eliminating cloud costs altogether. Until then, careful monitoring of usage and perhaps restricting very complex queries will help manage the expenses. The cost of using a cloud LLM is weighed against its benefits, and if the project scales up, the team will conduct a cost-benefit analysis (including exploring different cloud providers or on-prem solutions) (5).
Each of these challenges provided valuable insights. By addressing them, the project not only made the current solution more robust but also gathered knowledge that will inform future improvements. It’s worth noting that some challenges (like terminology mapping) are not fully “solved” yet – they remain active areas of development. But identifying these needs early means the team and community can work on enhancements (for example, incorporating a terminology service or improving the underlying data model) to continuously refine the system.
6. Future Directions and Ideas to Explore
The OpenELIS AI-Assisted Lab Reporting project is at the frontier of merging laboratory informatics with AI, and there are many exciting possibilities to extend and improve it. As the project moves forward, here are some future directions and ideas being considered:
Multi-step queries and dialogue:
Enabling the system to handle more complex questions by breaking them down into sub-queries or an interactive dialogue. For example, if a user asks a very broad question, the agent could ask a follow-up for clarification or internally decompose the query into parts. This question decomposition could improve accuracy for complex analytics (5). Moreover, a conversational interface would allow users to refine their queries (“Actually, show me last month instead”) and drill down into details, making the AI agent more of an interactive data assistant.
Enhanced terminology support:
Continuing to improve how the AI understands various ways of referring to tests and results. This might involve integrating standard healthcare terminologies (LOINC for lab tests, SNOMED CT for conditions, etc.) so that the system recognizes when a user’s casual term maps to a standardized concept. In practice, the AI agent could use a terminology service to expand queries – e.g., if asked about “malaria tests,” it would know to retrieve data for all the specific test codes used for malaria. This would make the agent more robust for users who may not use the exact technical term in their question.
Better narrative generation and explainability:
Enhancing the local LLM’s output to not only provide an answer but also brief explanations or context when appropriate. For instance, if a result is unusual, the narrative might add, “this is higher than typical for the lab in that period.” Another aspect is ensuring the narrative is factually accurate and traceable to the data. One idea is to have the system cite the data points it used (similar to how this document uses citations) – effectively, the AI could include references like “(according to the lab records)” or even provide the option to see the raw data behind the narrative. This can build trust that the AI’s answers are grounded in real data.
Performance optimization for local AI:
Exploring ways to make the on-premise AI faster and more efficient. This could involve trying out new lightweight LLM architectures as they become available, or running the model on specialized hardware like AI accelerators. If the project scales to many concurrent users, techniques like batching requests or maintaining the model in memory between requests (to avoid re-loading it each time) could yield faster responses. The goal is to approach a near real-time experience, where the delay between question and answer is minimal, making it feel seamless to users.
Full open-source AI stack:
Currently, the solution uses a proprietary cloud LLM for the SQL generation. In the future, the team hopes to replace or supplement this with an open-source model that can do the same task. Recently, community-driven LLMs have made great strides (6). By fine-tuning one of these on a corpus of lab-related questions and SQL (possibly using the logs from this project itself, with de-identified queries), it might be possible to achieve similar performance without needing a cloud service. An open-source text-to-SQL model running locally would address both the cost and data dependency concerns and make the entire pipeline open source end-to-end.
Wider applicability and integration:
The architecture used here could be applied beyond OpenELIS. In the future, we might see AI assistants for other health data systems, like OpenMRS (electronic medical records) or DHIS2 (health statistics). Because the approach leverages FHIR and standard SQL, one can imagine a similar agent answering questions on patient medical records or public health indicators. In fact, the team designed the solution to be fairly generic: any data that can be represented in FHIR and loaded into a Spark/SQL warehouse could potentially be plugged into this AI framework. This opens the door to a family of AI-assisted reporting tools across different domains of healthcare. For now, the focus is on labs, but it’s easy to see how the lessons learned here could benefit other systems.
User interface and usability:
As the backend matures, another future area is improving how users interact with the AI agent. This could mean integrating the question-answer interface directly into the OpenELIS web application, so that a lab manager can ask questions right where they normally work. It also means gathering user feedback on the answers: was the answer helpful, do they need more detail? That feedback loop can guide further tweaking of the AI’s behavior (for example, if users often rephrase a question, it might mean the AI needs to better handle the original phrasing). Usability testing in real lab environments will be crucial to fine-tune the system for actual workflows.
Continuous learning and adaptation:
The healthcare landscape and lab data trends change over time. Future iterations of the AI agent might include mechanisms for learning from new data or from user interactions. For instance, if users frequently ask a type of question that wasn’t anticipated, the team can incorporate that into the system (either by adding new views, adjusting the LLM prompt, or expanding the terminology mapping). The agent could also be updated as new guidelines or metrics become important (e.g., if a new infectious disease emerges and labs start tracking it, the AI should learn about the new test codes and names). Keeping the AI updated will be an ongoing effort, potentially aided by community contributions given the open-source nature of the project.
7. Some Concluding Words
The OpenELIS AI-Assisted Lab Reporting project is a community-led effort to build an AI-powered global good at the intersection of health informatics and artificial intelligence. It provides a glimpse into how routine data reporting and analysis might be transformed in the coming years by such agents, as workflows move from static reports and manual queries to interactive, intelligent systems that can converse with users and deliver insights in the right place, at the right time, and in the desired format. Despite the challenging and novel nature of the project, we believe our progress so far demonstrates that this can be a viable and useful direction to pursue. By harnessing the power of LLMs in a safe, targeted way and leaning on interoperability standards like FHIR and existing global goods, this project aims both to improve lab reporting for OpenELIS and to contribute to a framework that others in the digital health community can build upon. Please reach out to us with any ideas, feedback, or suggestions for collaboration - we would love to build a community around the application of AI methods in the healthcare space!
References