Research

At the Fraunhofer Institute for Applied Information Technology FIT, we stand at the forefront of research in the field of generative AI and large language models. Our interdisciplinary research encompasses all application levels of generative AI. We delve deeply into understanding the complex requirements of these systems, ensuring they align with human-centric values and needs.

Our research addresses the evaluation of technical feasibility as well as the socio-ethical implications of generative AI. This dual approach ensures that our innovations not only expand technological boundaries but also meet societal norms and expectations.

Our commitment to excellence is reflected in our active participation in the scientific community. We regularly publish our work in renowned journals and conferences, sharing and discussing our findings with the research community. This continuous involvement helps us to further develop innovative ideas and stay abreast of the latest technology. At Fraunhofer FIT, we do not just observe digital transformation – we shape it.

Future project: Large Language Models

Internal research projects

Fraunhofer FIT has endowed an internal task force with a budget of one million euros to prepare, in a competitive process, internal projects that develop ready-to-use solutions for companies. The range of applications for these projects is broad and covers numerous promising areas of use.

Accessible and Automatic Operational Process Improvement using Large Language Models

Process Mining (PM) analyses event data from real-world processes to derive information and knowledge about those processes. In real-world settings, analysing and improving processes involves multiple domain experts, who are usually not experts in PM methods. It is therefore of high interest to make PM methods and their results accessible and understandable for these domain experts. Large language models (LLMs) can accelerate this task in PM. Although work on PM and LLMs is emerging, there is no systematic evaluation of their interplay. In addition, the use of graphs as PM results in collaboration with these experts is barely described and evaluated. The Accessible and Automatic Operational Process Improvement using Large Language Models project (AAOP LLM) investigates two focal points: (1) evaluating LLMs for PM in general by conceptualizing the LLM characteristics and abilities that are mandatory for PM, and (2) evaluating the ability of LLMs to make process models in the Business Process Model and Notation (BPMN) interactive, i.e., answering questions, explaining models, and adapting models based on user interaction. We present a software proof of concept for the interplay of BPMN and LLMs in the context of PM, evaluate the performance of LLMs in interaction with industry users, and publish the concept of the evaluation of LLMs for PM.
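To illustrate the intended interplay, the following minimal sketch shows how a BPMN 2.0 model, serialized as XML, could be handed to an LLM so that domain experts can ask questions about it in natural language. The OpenAI client, model name, file path, and prompt wording are illustrative assumptions, not the project's proof-of-concept implementation.

```python
# Minimal sketch: answering natural-language questions about a BPMN model with
# an LLM. Model name, file path, and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask_about_bpmn(bpmn_path: str, question: str) -> str:
    # BPMN 2.0 models are XML; passing the serialized model as context lets the
    # LLM answer questions about activities, gateways, and flow.
    with open(bpmn_path, encoding="utf-8") as f:
        bpmn_xml = f.read()

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[
            {"role": "system",
             "content": "You explain BPMN 2.0 process models to domain experts "
                        "in plain language. Base every answer only on the model."},
            {"role": "user",
             "content": f"BPMN model:\n{bpmn_xml}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# Example: ask_about_bpmn("order_process.bpmn", "Which activities can run in parallel?")
```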

BAföG Chatbot

The project seeks to address the underutilization of federal student aid (BAföG) due to informational gaps and misperceptions of the eligibility criteria. Our BAföG chatbot aims to revolutionize how students access information about BAföG and find out whether they are eligible by making the process more accessible and interactive. This innovation can be extended to other social benefits, and we aim to enhance consultation quality across various state transfer benefits. Leveraging an interactive, dialogue-based interface, we pursue two primary objectives: first, to simplify access to information on BAföG for students; second, to make the eligibility calculation process more intuitive. By integrating local open-source Large Language Models (LLMs), the chatbot conducts natural dialogues with users, allowing for an intuitive user experience while protecting user data. We utilize legislative texts, structured data from knowledge graphs, and a comprehensive calculation tool developed in-house to provide precise eligibility assessments; in this process, the LLM uses the calculator as a tool. This approach not only ensures the dissemination of verified information but also adapts to complex user inquiries efficiently. We plan to have a minimum marketable product by June 2024, which will demonstrate the possibilities that LLMs provide to a wide audience. Furthermore, we plan to use it in various industry projects and will publish scientific papers on the findings of our research. Finally, we plan to test the efficacy of information treatments with chatbots versus traditional information provision in the context of a Ph.D. paper.
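The "LLM uses the calculator as a tool" pattern can be sketched as follows. The toy calculator, its parameters and figures, and the model name are illustrative assumptions; the in-house calculation engine is far more comprehensive. Many local LLM servers expose an OpenAI-compatible API, which is assumed here so that user data stays on-premises.

```python
# Minimal sketch of letting an LLM call an eligibility calculator as a tool.
import json
from openai import OpenAI

# Assumption: a local, OpenAI-compatible LLM server (e.g. vLLM or Ollama).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def estimate_bafoeg(monthly_parental_income: float, rent: float) -> dict:
    """Toy stand-in for the in-house BAföG calculation engine (illustrative figures)."""
    base_need = 475 + min(rent, 380)
    grant = max(0.0, base_need - 0.3 * monthly_parental_income)
    return {"estimated_monthly_grant_eur": round(grant, 2)}

tools = [{
    "type": "function",
    "function": {
        "name": "estimate_bafoeg",
        "description": "Estimate the monthly BAföG grant from basic inputs.",
        "parameters": {
            "type": "object",
            "properties": {
                "monthly_parental_income": {"type": "number"},
                "rent": {"type": "number"},
            },
            "required": ["monthly_parental_income", "rent"],
        },
    },
}]

messages = [{"role": "user",
             "content": "My parents earn 2400 euros a month and my rent is 400 euros. Am I eligible?"}]
reply = client.chat.completions.create(model="local-llm", messages=messages, tools=tools)

# The model requests the calculator; we run it and return the result.
tool_call = reply.choices[0].message.tool_calls[0]
result = estimate_bafoeg(**json.loads(tool_call.function.arguments))

# Feed the tool result back so the model can phrase the answer in natural language.
messages += [reply.choices[0].message,
             {"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(result)}]
final = client.chat.completions.create(model="local-llm", messages=messages, tools=tools)
print(final.choices[0].message.content)
```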

CyberGuard – Creating Machine Readable Cybersecurity Playbooks with Large Language Models

The cybersecurity industry is leaning toward automation and collaboration in cybersecurity response workflows (or playbooks) to improve incident response capabilities. However, current cybersecurity services lack a unified and standardized approach to documenting and sharing cybersecurity incident processes. Manually creating standardized machine-readable playbooks from legacy siloed or semi-structured guidelines is time- and resource-intensive for security operators. CyberGuard aims to address this by developing an AI-based methodology that automatically translates and shares cybersecurity playbooks as accurately and effectively as those created manually, using Large Language Models (LLMs). Additionally, the project aims to address the privacy and confidentiality challenges of playbook sharing and enable the exchange of best practices across the industry. CyberGuard will fine-tune openly available LLMs on local datasets and utilize Retrieval-Augmented Generation or prompt-engineering approaches. While LLMs have proven their potential in generating content such as code and tabular data, a gap exists between expected and experienced results, especially concerning task completion time, accuracy, and user-friendliness in understanding and debugging. Our goal is to advance LLM enhancement strategies, creating a framework that decomposes complex tasks into adjustable individual queries and ensures automated refinement loops to obtain syntactically and semantically correct playbooks from the LLM. The added value of CyberGuard includes a prototype solution that provides structured and machine-readable processes in standardized formats, offering flexibility and interoperability in visualizing the guidelines. This project will encourage knowledge sharing and collaboration within the cybersecurity industry and take a step toward the automation of response processes while recommending actions for human operators.
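The automated refinement loop can be illustrated with a minimal sketch: the LLM drafts a machine-readable playbook, a validator checks it, and validation errors are fed back until the output is syntactically correct. The toy schema, prompts, and model name are illustrative assumptions; real playbook standards such as CACAO are considerably richer.

```python
# Minimal sketch of an automated refinement loop for machine-readable playbooks.
import json
from jsonschema import validate, ValidationError
from openai import OpenAI

client = OpenAI()

# Illustrative, heavily simplified playbook schema.
PLAYBOOK_SCHEMA = {
    "type": "object",
    "required": ["name", "steps"],
    "properties": {
        "name": {"type": "string"},
        "steps": {"type": "array", "items": {
            "type": "object",
            "required": ["id", "action"],
            "properties": {"id": {"type": "string"}, "action": {"type": "string"}},
        }},
    },
}

def generate_playbook(guideline_text: str, max_rounds: int = 3) -> dict:
    prompt = ("Convert the following incident-response guideline into a JSON "
              f"playbook matching this schema:\n{json.dumps(PLAYBOOK_SCHEMA)}\n\n"
              f"Guideline:\n{guideline_text}\nReturn only JSON.")
    for _ in range(max_rounds):
        reply = client.chat.completions.create(
            model="gpt-4o", messages=[{"role": "user", "content": prompt}])
        draft = reply.choices[0].message.content
        try:
            playbook = json.loads(draft)
            validate(playbook, PLAYBOOK_SCHEMA)  # syntactic and structural check
            return playbook
        except (json.JSONDecodeError, ValidationError) as err:
            # Refinement loop: return the error to the model and ask for a fix.
            prompt = f"The previous output was invalid ({err}). Fix it:\n{draft}"
    raise RuntimeError("No valid playbook after refinement rounds")
```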

ELMTEX – E-health applications using Large Language Models for TEXtual information extraction and processing

ELMTEX is driven by the need to efficiently extract and structure information from unstructured domain-specific texts, such as those found in the medical field. The project is motivated by the inherent challenges associated with processing domain-specific texts, including the presence of synonyms and homonyms, and the lack of comprehensive ground truth datasets. These challenges complicate information extraction in critical medical tasks such as systematic literature reviews of medical scholarly data, ICD coding of clinical admissions, and billing. To address these challenges, ELMTEX aims to leverage the power of sophisticated LLMs such as LLaMA by adapting them to medical texts. This involves fine-tuning these models on carefully selected datasets, helping them to understand the complexities of medical terminology and context. Consequently, the goal of ELMTEX is to automate the information extraction process in the medical domain. To achieve this, ELMTEX will employ word classification, semantic analysis, and other strategies to train models capable of accurately identifying and categorizing information within unstructured text. The process relies on iterative refinement through feedback loops to enhance model performance and to ensure the models are applicable to real-world medical data. The main benefits of ELMTEX include creating a strong foundation for processing medical texts and laying the groundwork for future projects that will depend on accurate and structured information. Furthermore, the project will foster partnerships with medical and pharmaceutical institutions, such as Uni Klinik Köln and Uni Klinik Düsseldorf, for new applications of the technology, potentially improving how medical information is used in research, clinical practice, and healthcare administration.
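As a minimal sketch of the kind of structured extraction ELMTEX targets, the snippet below prompts an LLM to return entities from a clinical note as JSON. The field names, example note, and model are illustrative assumptions, not the project's schema or its fine-tuned models.

```python
# Minimal sketch of extracting structured information from a clinical note.
import json
from openai import OpenAI

client = OpenAI()

def extract_clinical_facts(note: str) -> dict:
    reply = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # ask for well-formed JSON
        messages=[
            {"role": "system",
             "content": "Extract diagnoses, medications, and procedures from the "
                        "clinical note. Respond with JSON using the keys "
                        "'diagnoses', 'medications', 'procedures'; each value is "
                        "a list of strings copied verbatim from the note."},
            {"role": "user", "content": note},
        ],
    )
    return json.loads(reply.choices[0].message.content)

note = ("Patient admitted with community-acquired pneumonia. "
        "Started on amoxicillin/clavulanate; chest X-ray performed on admission.")
print(extract_clinical_facts(note))
```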

ENGAGE-D – Enhancing Gaia-X Data Access through Interactive SD Generation

ENGAGE-D aims to develop an LLM framework that automates data management aspects in the context of data spaces. This includes using retrieval augmentation to allow the LLM to create Self-Descriptions and retrieve linked data for use in the dataspace. Additionally, we want to automate general data processing tasks related to possible dataspace data analytics services. ENGAGE-D pursues two goals: we want to make interacting with dataspaces simpler by offering a tool that can configure or automate key configuration and operation processes using natural language, and we aim to establish a framework for generative data analytics by reusing and refining the general data processing capabilities built up in this project. In ENGAGE-D, Python and LangChain are used to configure a pipeline around the LLM that retrieves the information necessary for our tasks, such as Self-Description templates, linked-data definitions, and information about data and data sources for processing. We also make use of output manipulation to identify executable code, single out target information for our tasks, and give feedback to the LLM for further processing. The dataspace configuration part of the framework will be used in our wealth of dataspace-focused projects. The data processing part of the framework will be used to advertise our expertise in LLMs to potential customers in June 2024 and will likely be reused in upcoming industry projects. We are planning to publish scientific papers about the project results, at least one per goal. This project is also aligned with the PhD topics of at least three project members.
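A minimal sketch of retrieval-augmented Self-Description (SD) generation with LangChain is shown below. The toy template catalogue, the keyword-based retrieval step, and the model name are illustrative assumptions; the actual pipeline retrieves Gaia-X SD templates and linked-data definitions from the dataspace itself, typically via a vector store.

```python
# Minimal sketch: fill a Gaia-X Self-Description template from a natural-language
# resource description using LangChain. Templates and model are illustrative.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Toy "retrieval": look up an SD template by resource type; in practice a
# vector store over real SD templates would take this role.
SD_TEMPLATES = {
    "DataResource": '{"@type": "gx:DataResource", "gx:name": "", "gx:license": ""}',
    "Service": '{"@type": "gx:ServiceOffering", "gx:name": "", "gx:policy": ""}',
}

def retrieve_template(resource_type: str) -> str:
    return SD_TEMPLATES.get(resource_type, SD_TEMPLATES["DataResource"])

prompt = ChatPromptTemplate.from_messages([
    ("system", "Fill the Gaia-X Self-Description template with values from the "
               "user's description. Return only the completed JSON-LD."),
    ("user", "Template:\n{template}\n\nResource description:\n{description}"),
])
llm = ChatOpenAI(model="gpt-4o")  # assumed model
chain = prompt | llm

result = chain.invoke({
    "template": retrieve_template("DataResource"),
    "description": "Hourly air-quality measurements for Cologne, CC-BY-4.0 licensed.",
})
print(result.content)
```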

EnterpriseGPT – A FIT user research initiative

The integration of Large Language Models (LLMs) into corporate environments presents both opportunities and challenges for improving workplace productivity and employee skills. This project, a collaboration between an industry partner and FIT, investigates the implementation and use of an internal LLM application among employees. The LLM application assists employees with various tasks and is currently being piloted, with plans to expand access to thousands of employees worldwide. The research aims to investigate the adaptation, acceptance, and impact of the LLM application on employees' daily work, self-perception, and professional role within the company. It focuses on the adaptation of LLMs to the dynamic needs of human-AI collaboration, the factors that influence this collaboration, and the incorporation of human insights to improve LLM performance on domain-specific challenges, with the aim of increasing acceptance in employees' daily work. To this end, a series of scientific observations, such as surveys and interviews with employees, as well as a series of experiments with the industrial partner, will be carried out. This initiative is based on an interdisciplinary approach, bridging computer science, innovation management, and interaction design. The project offers the opportunity to gain industry-relevant insights that can be transferred to other industries. FIT will gain expertise in the user-centered design of LLMs and their adaptation to domain-specific requirements. In addition, the formulation of application-oriented guidelines and training certifications positions FIT as a relevant partner for future industry collaborations.

LIKE – Language Model Integrated Knowledge Engine

Knowledge work is pivotal and esteemed in our information- and service-driven society. However, it presents several challenges, including its labor-intensive nature, error susceptibility, and time-consuming processes. Moreover, the escalating scarcity of skilled workers exacerbates these challenges, demanding urgent attention and innovative solutions. The objective of the LIKE project is the complete automation of knowledge work processes by means of LLM-based agents. These agents should be capable of interacting autonomously, making decisions, and executing various knowledge work tasks. This automation promises to mitigate the inherent challenges associated with knowledge work. Primarily focused on agent orchestration and workflows, the LIKE project uses the creation of research papers and white papers as its primary example while also exploring diverse practical application scenarios. From a technical perspective, the LIKE project delves into developing LLM-based agent communication chains, implementing memory through knowledge graphs, and integrating external tools with the agents. The LIKE project adopts the design science research paradigm as its methodological framework, integrating closely linked, iterative design and evaluation phases. Significantly, the emphasis lies in assessing the practical relevance of the developed artifacts in collaboration with industry stakeholders. Evaluation tools include interviews, literature research, and prototype benchmarking to ensure thorough examination and refinement throughout the project lifecycle. The outcomes of the LIKE project primarily contribute to advancing technical and methodological proficiency in LLM agent-based automation. These findings will enrich our understanding of the field and serve as a foundational resource for securing industrially funded research projects during and after the project's duration.
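The idea of an agent communication chain can be sketched minimally: a writer agent drafts a white-paper section and a reviewer agent critiques it until it approves. The roles, prompts, stopping criterion, and model are illustrative assumptions, not the LIKE orchestration framework.

```python
# Minimal sketch of a writer/reviewer agent chain for a knowledge-work task.
from openai import OpenAI

client = OpenAI()

def agent(system_prompt: str, user_content: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": user_content}],
    )
    return reply.choices[0].message.content

def write_section(topic: str, max_rounds: int = 3) -> str:
    draft = agent("You are a scientific writer. Draft a concise white-paper section.",
                  topic)
    for _ in range(max_rounds):
        review = agent("You are a critical reviewer. If the section is publishable, "
                       "answer exactly 'APPROVED'. Otherwise list concrete revisions.",
                       draft)
        if review.strip() == "APPROVED":
            break
        draft = agent("Revise the section according to the reviewer's comments.",
                      f"Section:\n{draft}\n\nReviewer comments:\n{review}")
    return draft

# Example: write_section("LLM-based agents for automating knowledge work")
```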

LLM2BIZ – Applied LLM-Workshops for Executives

Our project focuses on developing a comprehensive workshop concept for Large Language Models (LLMs) to equip business executives with practical skills and interdisciplinary understanding. By bridging academic insights with practical application, we aim to empower executives to identify, evaluate, and utilize LLMs effectively in their industries. The workshops will cover ethical, legal, and economic aspects of LLMs, providing participants with didactically high-quality learning opportunities. Additionally, informative whitepapers will delve into crucial topics related to LLMs, enhancing understanding and knowledge dissemination. Our project combines theoretical exploration with practical implementation, addressing compliance requirements, cost factors, and the methodological competencies necessary for LLM application. We actively engage in clarifying ethical and legal considerations while exploring the economic impacts and opportunities associated with LLMs. Ethical, legal, and regulatory aspects are integral to our project, with workshops dedicated to fostering responsible and ethical LLM usage. By promoting sustainable and responsible LLM utilization, we contribute to the UN Sustainable Development Goals, such as Decent Work and Economic Growth; Industry, Innovation, and Infrastructure; and Quality Education and Lifelong Learning. Our initiative not only boosts economic growth and innovation but also creates job opportunities and promotes continuous professional development.

MyPersonalChatbot

Effective communication is considered the cornerstone of an inclusive society. In Germany, more than 10 million people face communication barriers due to learning difficulties, intellectual disabilities, language deficits, or age. This often leads to violations of fundamental rights such as freedom of expression and access to information. The "MyPersonalChatbot" project aims to reduce these communication barriers by developing an inclusive chatbot. It is based on modern LLM technology and is intended to promote equal access to information and contribute to social participation and self-determination. A participatory process with the active involvement of target groups and stakeholders guarantees that solutions are developed based on real needs. The design of an adaptive user interface, which personalizes the content and combines image- and text-based AI, enables comprehensive communication support. The collaboration with the "BAföG Chatbot" and "EnterpriseGPT" projects also enables knowledge exchange, the use of a benchmarking system, the development and use of a common LLM technology base, and a customizable user interface. In addition, the synergy with the AWIEW project addresses the information needs of people with disabilities in the context of work, which is the prioritized use case for the chatbot. "MyPersonalChatbot" is characterized by the automatic adaptation of the user interface and personalized communication (e.g., easy language). It serves as a demonstrator for FIT, enables the publication of research results, and promotes the reuse of experience within the institute. Ultimately, the initiative stands for the commitment to use advanced AI technology for the comprehensive dissemination of information and to promote a more democratic and accessible world through digitalization, in line with Fraunhofer FIT's motto "enabling. digital. spaces.".

PEACH – Personalized Learning Chatbot empowered by Large Language Models and Knowledge Graph

The integration of artificial intelligence in education has seen significant advances with the rise of Large Language Models (LLMs) like ChatGPT. Recognizing their potential to enhance educational methodologies, the "Personalized Learning Chatbot" project (PEACH) addresses the challenge of LLMs "hallucinating" knowledge by incorporating Knowledge Graphs (KGs) to improve accuracy and reliability. We propose a user-centered design (UCD) approach that combines KGs and LLMs to develop personalized tutors that emulate the effectiveness of one-on-one tutoring. This is a promising approach to surmounting the current challenges of AI-driven personalized learning, such as the need for dynamic educational resources. PEACH integrates LLMs like GPT-4 with a KG to create a chatbot that personalizes the educational experience: besides the knowledge acquired through activities like tutoring and customized exam preparation, it considers individual learners' psychometric profiles and user-centered design. For example, students can get on-demand feedback on their summaries and quiz themselves on topics relevant to their lecture. A KG created from multimodal data, such as slides, videos, podcasts, and textbooks, aligns this content with specific course objectives. The tool will be crafted in collaboration with the University of Hohenheim, and its effectiveness will be tested with students, assessing and iterating to ensure efficiency and cognitive alignment. The project aims to deliver a functional chatbot prototype and to provide a service for companies with a template for enhancing their training or knowledge management processes.
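The KG-grounding idea can be sketched minimally: facts matching the question are pulled from the knowledge graph and passed to the LLM as the only allowed context, which curbs hallucination. The tiny inline KG, the SPARQL query, and the model are illustrative assumptions, not the PEACH knowledge base.

```python
# Minimal sketch of grounding a tutoring answer in a knowledge graph.
from rdflib import Graph
from openai import OpenAI

# Illustrative micro-KG; the real KG is built from slides, videos, and textbooks.
TURTLE = """
@prefix ex: <http://example.org/course#> .
ex:Backpropagation ex:definedAs "Algorithm that computes gradients of the loss w.r.t. all weights via the chain rule." ;
                   ex:coveredIn "Lecture 4" .
"""

kg = Graph().parse(data=TURTLE, format="turtle")
client = OpenAI()

def tutor_answer(question: str, topic_uri: str) -> str:
    facts = kg.query(f"SELECT ?p ?o WHERE {{ <{topic_uri}> ?p ?o }}")
    context = "\n".join(f"{p} {o}" for p, o in facts)
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[
            {"role": "system",
             "content": "You are a personal tutor. Answer only from the provided "
                        "course facts; say so if the facts are insufficient."},
            {"role": "user", "content": f"Facts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

print(tutor_answer("Explain backpropagation in one paragraph.",
                   "http://example.org/course#Backpropagation"))
```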

POWL – Partially Ordered Process Modeling with GPT

The project aims to merge the advanced capabilities of LLMs with the innovative Partially Ordered Workflow Language (POWL) to significantly enhance the efficiency, understanding, and accessibility of process modeling. Our primary motivation stems from the challenge of creating process models, which requires substantial domain knowledge and expertise in the employed modeling languages. Our aim is to generate process models in standard notations familiar to most professionals in the business process management field, such as Business Process Model and Notation (BPMN) and Petri nets. However, such modeling languages are complex with a high potential for quality issues. For example, it is possible to generate Petri nets or BPMN models with dead parts that can never be reached. We employ POWL for the intermediate process representation as POWL inherently ensures soundness, and the generated POWL models can then be transformed into BPMN and Petri nets. POWL extends partially ordered graphs with control-flow operators to support a wider array of process constructs, presenting a novel solution for modeling complex processes. However, the hierarchical and complex nature of POWL models makes their creation a challenging task, especially for users without deep technical expertise. Our goal is to leverage GPT-4's capabilities to automate the generation and iterative refinement of POWL models starting from textual descriptions in natural language. This initiative aims to lower the technical barriers for process analysts, making process modeling more accessible and intuitive. We will use synthetic and real-life processes to evaluate our approach.
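A minimal sketch of the text-to-POWL step with iterative refinement is given below. The textual POWL notation in the prompt, the placeholder validator, and the model name are illustrative assumptions; the project's actual pipeline parses the output into POWL objects, checks soundness, and converts the models to BPMN and Petri nets.

```python
# Minimal sketch: generate a POWL-style expression from a textual process
# description and refine it until a (placeholder) validator accepts it.
from openai import OpenAI

client = OpenAI()

POWL_INSTRUCTIONS = (
    "Describe the process as a POWL model using nested expressions: "
    "PartialOrder(nodes=[...], order=[(a, b), ...]), Xor(a, b), Loop(do, redo), "
    "and quoted activity labels. Return only the expression."
)

def looks_like_powl(expr: str) -> bool:
    # Placeholder structural check; a real validator would parse the expression
    # into POWL objects and verify its soundness.
    return expr.count("(") == expr.count(")") and "PartialOrder" in expr

def text_to_powl(description: str, max_rounds: int = 3) -> str:
    prompt = f"{POWL_INSTRUCTIONS}\n\nProcess description:\n{description}"
    for _ in range(max_rounds):
        expr = client.chat.completions.create(
            model="gpt-4", messages=[{"role": "user", "content": prompt}]
        ).choices[0].message.content.strip()
        if looks_like_powl(expr):
            return expr
        prompt = (f"The previous answer was not a valid POWL expression:\n{expr}\n"
                  "Please correct it.")
    raise ValueError("No valid POWL expression produced")

print(text_to_powl("After order receipt, invoicing and shipping run independently; "
                   "the process ends with archiving."))
```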

Prosumer GPT

With the European Union and Germany setting ambitious CO2 emission reduction targets, the transformation of consumers into prosumers through new technologies is essential. This project, Prosumer GPT, addresses the challenges faced by customers in making informed investment decisions regarding energy-efficient technologies and managing these assets effectively. The primary goal is to support prosumers in decision-making and efficient operation of technologies like photovoltaic systems, heat pumps, and storage systems through an innovative Large Language Model (LLM). This tool aims to act as a virtual consultant, aiding in both planning and operational phases of home energy management systems (HEMS). The project will develop an LLM as a user interface backed by a Python-based calculation engine. This approach focuses on two main aspects: assisting in energy investments and building refurbishments, and enhancing user-friendliness of HEMS. The LLM will guide users in understanding complex models related to cost, revenue, and CO2 emissions, and facilitate interaction with HEMS for exploring various energy management scenarios. Ethics and privacy considerations are integral, ensuring data sensitivity and regulatory compliance. The methodology aligns with the United Nations' Sustainable Development Goals, promoting affordable clean energy, sustainable communities, and climate action. The Prosumer GPT project is innovative in its approach to simplifying complex energy management and investment decisions for end-users. By leveraging advanced LLM technology, the project seeks to make sophisticated energy models accessible and understandable to the general public, fostering trust and empowerment in managing energy resources. Additionally, it contributes to global efforts in sustainability and digital security. The project's deliverables include a user-friendly interface, an enhanced understanding of energy systems for consumers, and a step towards more sustainable living practices. With its comprehensive plan to integrate LLM with energy management systems, Prosumer GPT stands to benefit a wide range of stakeholders, from individual homeowners to large-scale energy providers.
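The division of labour between the LLM interface and the Python calculation engine can be sketched as follows: the engine evaluates an investment scenario deterministically, and the LLM only turns the numbers into a plain-language recommendation. All parameters, figures, and the model name are illustrative assumptions, not the project's energy models.

```python
# Minimal sketch: deterministic calculation engine plus LLM-based explanation.
from openai import OpenAI

client = OpenAI()

def pv_scenario(kwp: float, price_per_kwp: float, yield_kwh_per_kwp: float,
                self_consumption_share: float, electricity_price: float,
                feed_in_tariff: float) -> dict:
    """Toy photovoltaic investment model (annual figures, illustrative only)."""
    production = kwp * yield_kwh_per_kwp
    savings = production * self_consumption_share * electricity_price
    feed_in = production * (1 - self_consumption_share) * feed_in_tariff
    invest = kwp * price_per_kwp
    annual_benefit = savings + feed_in
    return {"investment_eur": invest,
            "annual_benefit_eur": round(annual_benefit, 2),
            "payback_years": round(invest / annual_benefit, 1)}

result = pv_scenario(kwp=8, price_per_kwp=1500, yield_kwh_per_kwp=950,
                     self_consumption_share=0.35, electricity_price=0.32,
                     feed_in_tariff=0.08)

reply = client.chat.completions.create(
    model="gpt-4o",  # assumed model
    messages=[{"role": "system",
               "content": "Explain the result of the calculation engine to a "
                          "homeowner in two or three plain sentences."},
              {"role": "user", "content": str(result)}],
)
print(reply.choices[0].message.content)
```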

Selection of relevant publications

  • Gimpel, H., Hall, K., Decker, S., Eymann, T., Lämmermann, L., Mädche, A., Röglinger, R., Ruiner, C., Schoch, M., Schoop, M., Urbach, N., Vandirk, S. (2023). Unlocking the Power of Generative AI Models and Systems such as GPT-4 and ChatGPT for Higher Education: A Guide for Students and Lecturers. University of Hohenheim, March 20, 2023.
  • Guggenberger, T., Lämmermann, L., Urbach, N., Walter, A. and Hofmann, P. (2023) Task delegation from AI to humans: A principal-agent perspective, Proceedings of the 44th International Conference on Information Systems, December 10-13, Hyderabad, India.
  • Duda, S., Hofmann, P., Urbach, N., Völter, F. and Zwickel, A. (2023) The Impact of Resource Allocation on the Machine Learning Lifecycle: Bridging the Gap between Software Engineering and Management, Business & Information Systems Engineering (BISE), forthcoming.
  • Hofmann, P., Jöhnk, J., Protschky, D. and Urbach, N. (2020) Developing Purposeful AI Use Cases – A Structured Method and its Application in Project Management, Proceedings of the 15th International Conference on Wirtschaftsinformatik (WI 2020), March 9-11, Potsdam, Germany.
  • Hofmann, P., Lämmermann, L., & Urbach, N. (2024). Managing artificial intelligence applications in healthcare: Promoting information processing among stakeholders. International Journal of Information Management, 75, 102728.
  • Kecht, C., Egger, A., Kratsch, W., & Röglinger, M. (2021). Event log construction from customer service conversations using natural language inference. In 2021 3rd International Conference on Process Mining (ICPM) (pp. 144-151). IEEE.
  • Bayer, S., Gimpel, H., Markgraf, M. (2021). The role of domain expertise in trusting and following explainable AI decision support systems. Journal of Decision Systems, 32(1):110-138.
  • Maedche, A., Legner, C., Benlian, A., Berger, B., Gimpel, H., Hess, T., ... & Söllner, M. (2019). AI-based digital assistants: Opportunities, threats, and research perspectives. Business & Information Systems Engineering, 61, 535-544.