AI in Companies: Central Legal Aspects


The use of AI systems has by now become part of everyday business operations in many companies. Texts are drafted, information is condensed, emails are prepared, and ideas are structured. An increasing variety of AI systems is being offered on the market. Following the initial wave of enthusiasm, many management bodies are therefore asking whether such use is permissible at all and, if so, in what context, with which data, and for what purposes the system may be used.

Within the European Union, AI systems and their use are now regulated in particular by Regulation (EU) 2024/1689 (the “AI Act”). The AI Act follows a risk-based approach under which many simple supportive applications are not subject to strict regulation, whereas AI systems used, for example, in recruitment may qualify as high-risk AI systems.

Legal issues also arise where the use of AI affects personal data, trade secrets, automated decision-making, or sensitive business processes.

I. Regulation of the Use of AI under the AI Act

The AI Act applies not only to providers, but also to operators of AI systems (referred to as “deployers” in the English version of the Act). It therefore covers not only the provider of a tool, but, under certain conditions, also the company using such a system within the European Union. For example, Article 4 of the AI Act has already required providers and operators since 2 February 2025 to take measures to ensure a sufficient level of AI literacy. Under the current legal framework, most further obligations under the AI Act are expected to become applicable from 2 August 2026.

For most typical office applications of generative AI, the AI Act will often not be the primary concern. The position is different, however, where a company uses AI in areas that the AI Act classifies as high-risk. High-risk AI systems will in future be subject to particularly stringent regulatory requirements.

This is highly relevant in business practice. If an AI system is used, for example, to filter applications, evaluate candidates, or prepare or support the assessment of the creditworthiness of natural persons, the company may move beyond the sphere of mere productivity support into the area of regulated high-risk use. For operators of such high-risk AI systems, the AI Act requires, in particular, use in accordance with the instructions for use, the assignment of human oversight, the monitoring of the system’s operation and, depending on the use case, information obligations vis-à-vis affected persons or employees.

The AI Act also contains specific transparency obligations, in particular for certain systems that interact with natural persons, as well as for certain AI-generated or manipulated content. For classic internal use of AI systems within a company, these obligations are not always immediately relevant; they become significant in practice where AI systems interact with natural persons or where AI-generated or manipulated content is involved.

This already points to an important issue for companies: anyone introducing generative AI into a business should not treat the matter as a mere IT issue. Its use forms part of compliance, data protection, information security, and internal organisation.

II. Data Protection Law: The Decisive Factor Is Which Data Is Entered

As soon as employees enter personal data into an AI system, this constitutes processing of personal data within the meaning of the GDPR. Such processing therefore requires a legal basis, together with an appropriate assessment of the associated risks. Data subjects must be informed in a transparent manner and, depending on the specific use case, the participation and co-determination rights of the works council must also be examined.

At the outset, it should be examined whether personal data needs to be entered at all and whether it may be entered. An important factor in this context is whether the AI system is self-hosted or at least operated in a protected environment, or whether it is a public solution.

These circumstances are often underestimated in day-to-day business. Many companies try to avoid this assessment simply by removing the name of the data subject. However, merely removing names is often not sufficient to fall outside the scope of the GDPR if the individual can still be identified from the surrounding context. This is particularly relevant for AI systems, as such systems are specifically designed to identify correlations, including from unstructured data. In this context, the use of AI systems may also trigger the requirement to carry out a data protection impact assessment pursuant to Article 35 GDPR.

In this regard, it is advisable to define internal responsibilities in a binding manner, establish clear rules, and raise employee awareness through training.

Where an AI system is public or hosted by a third party, it is important to review the provider’s contractual terms. In the case of publicly accessible or third-party-operated AI systems, particular attention should be paid to whether input data may, under the contractual terms or technical settings, be used for training, analysis, or improvement purposes.

Accordingly, the legally compliant use of AI does not usually begin with the prompt, but with a governance decision:

Who may use which tools, for what purposes, with which data, and subject to which approval or control mechanisms? It is often precisely here that the real management of risk lies.

III. Trade Secrets and Confidentiality

This leads to a point that is frequently underestimated in practice. Many companies now take care not to enter personal data into AI systems in an uncontrolled manner. Less attention is paid, however, to the fact that information without any personal reference may also be highly sensitive from both a legal and an economic perspective.

Accordingly, alongside data protection, the protection of confidential business information also represents a significant risk in the use of generative AI. This may include draft agreements, internal statements, negotiation strategies, pricing calculations, technical developments, source code, M&A documents, security concepts, product roadmaps, or other business-critical know-how. In advisory-intensive or technology-driven companies in particular, entering such information into an externally operated AI system may give rise to substantial risks, even where no personal data is processed.

This applies in particular to publicly accessible AI systems or systems centrally operated by third parties. In such cases, the question regularly arises whether, and to what extent, entered content is processed, stored, logged, or used on the server side for training or improvement purposes. That uncertainty alone may already be problematic from the company’s perspective. In the case of self-hosted solutions or solutions operated within a controlled environment, the risk situation may be considerably more manageable. Even then, however, there remains a need to define internally which information may be entered and which may not.

This is legally relevant not only from an economic perspective, but also with regard to the protection of trade secrets. Under the German Trade Secrets Act (GeschGehG), the statutory protection of a trade secret requires, among other things, that the information be subject to appropriate confidentiality measures. If a company enters sensitive information into external AI systems without clear rules, technical safeguards, or approval processes, this may give rise not only to a factual risk of information leakage. It may also raise the question whether the protection of trade secrets was adequately safeguarded from an organisational perspective.

The use of generative AI therefore requires a clear confidentiality-based approval logic. It is advisable to define which categories of information must never be entered into external systems, for which content only approved tools may be used, and in which cases an internal or self-hosted solution is preferable. Not everything that can technically be entered into an AI system may also be entered from a legal or business perspective.

IV. Liability Risks for Management in the Event of Uncontrolled AI Use

The use of generative AI is not merely a question of operational efficiency, but also an issue of proper corporate organisation. If management fails to address the matter, legal consequences may follow. These will primarily affect the company itself: depending on the individual case, supervisory measures and fines under data protection law, damages claims by affected persons, the loss of trade secrets, and erroneous decisions in sensitive business processes may arise.

Particularly where AI systems are not self-hosted, the risk often lies in the fact that personal data, confidential information, or business-critical content is processed without adequate rules. If appropriate organisational measures are lacking, the risk increases that the use of AI will no longer be regarded as a controlled business process, but instead as a compliance deficiency.

As set out above, the German Trade Secrets Act (GeschGehG) protects information as a trade secret only if, among other things, it is subject to appropriate confidentiality measures. If sensitive information is entered into external AI systems without sufficient safeguards, this is not only practically dangerous; it may also raise the question whether the protection of trade secrets was organisationally adequate.

The requirements arising under the AI Act also reinforce the need to manage the use of AI systems at management level, to safeguard such use by means of suitable technical and organisational measures, and to accompany it with control mechanisms.

For management, the central point of liability therefore lies in particular in the internal relationship with the company. Managing directors of a German limited liability company (GmbH) are required under section 43 of the German Limited Liability Companies Act (GmbHG) to exercise the diligence of a prudent businessperson; members of the management board of a German stock corporation (AG) are required under section 93 of the German Stock Corporation Act (AktG) to exercise the diligence of a prudent and conscientious manager. If they breach these duties, they are liable to the company for the resulting damage. Applied to the use of AI, this means that where significant legal and factual risks are foreseeable, a complete failure to establish governance, policies, responsibilities, training, and controls may constitute a breach of duty.

V. What Companies and Management Should Now Do in Practice

From a legal perspective, there is currently little reason to impose a blanket prohibition on the use of generative AI. At the same time, there is equally little to be said for allowing such use without any controls. Rather, the current legal framework points to a clear organisational requirement.

Where generative AI is used merely as a writing or research aid without any personal data involvement and without sensitive business information, the legal risks will generally be significantly more manageable. However, once personal data, trade secrets, personnel decisions, customer assessments, or other sensitive processes are involved, the legal requirements increase considerably.

In particular, it is advisable to implement an internal AI policy specifying which tools may be used, which data is off-limits, when approval is required, in which cases outputs must mandatorily be reviewed by a human, and how documentation, data protection, and information security are to be handled.

The legally compliant path therefore does not lie in a blanket yes or no, but in sound governance of AI use. It is against precisely that standard that companies will be measured in the years ahead: whether they use AI not only efficiently, but also lawfully.

Note

This article is for general informational purposes only and does not cover all possible circumstances. It does not replace individual legal advice or a case-specific review.

Despite careful preparation, no liability is assumed for the accuracy, completeness, or timeliness of the information. For legal evaluation or implementation recommendations in specific cases, professional legal advice should be sought.

Contact

I am here to support you with your legal concerns or a non-binding initial consultation. Contact me directly by phone or email.

Direct contact

Email: info@kanzlei-happel.de
Tel.: +49 (6106) 639 24 25
Consultation via email, phone, video conference, or by appointment.