• Что бы вступить в ряды "Принятый кодер" Вам нужно:
    Написать 10 полезных сообщений или тем и Получить 10 симпатий.
    Для того кто не хочет терять время,может пожертвовать средства для поддержки сервеса, и вступить в ряды VIP на месяц, дополнительная информация в лс.

  • Пользаватели которые будут спамить, уходят в бан без предупреждения. Спам сообщения определяется администрацией и модератором.

  • Гость, Что бы Вы хотели увидеть на нашем Форуме? Изложить свои идеи и пожелания по улучшению форума Вы можете поделиться с нами здесь. ----> Перейдите сюда
  • Все пользователи не прошедшие проверку электронной почты будут заблокированы. Все вопросы с разблокировкой обращайтесь по адресу электронной почте : info@guardianelinks.com . Не пришло сообщение о проверке или о сбросе также сообщите нам.

Using AI Services securely in TMS AI Studio

Sascha Оффлайн

Sascha

Заместитель Администратора
Команда форума
Администратор
Регистрация
9 Май 2015
Сообщения
1,549
Баллы
155
TMS Software Delphi  Components tmsaistudio


As artificial intelligence becomes increasingly integrated into software development workflows, security and privacy considerations have become important adoption decisions. TMS AI Studio provides developers with LLM-agnostic access to multiple AI services, including OpenAI, Gemini, Claude, Mistral, Grok, Perplexity, DeepSeek, and Ollama. While

Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.

acts as a wrapper that facilitates seamless interaction with these services, most security concerns relate to the underlying LLM providers themselves rather than the integration layer.

Understanding Model Context Protocol (MCP) Security


One area where specific security considerations arise is in the adoption of MCP (Model Context Protocol) servers. MCP servers can introduce security challenges because they act as bridges between AI models and internal systems, fetching and injecting context into LLM interactions.

Key MCP-specific security risks include:

Context poisoning: Attackers can manipulate upstream data sources to influence LLM outputs without directly compromising the model.

Token theft and credential exposure: MCP servers rely on authentication credentials like OAuth tokens and API keys, which if compromised, can grant unauthorized access to multiple connected systems.

Prompt injection attacks: Hidden instructions embedded in data sources can cause unintended behaviors when processed by the LLM.

Excessive permissions: MCP implementations often request broad permission scopes, creating a single point of failure that could expose email, files, databases, and source code if compromised.

Supply chain risks: Third-party MCP servers from public repositories may contain malicious code or vulnerabilities.

Organizations implementing MCP servers should treat them as privileged services, enforce strong authentication, implement network segmentation, and conduct thorough security reviews of any third-party implementations.

General LLM Security Considerations


Beyond MCP-specific concerns, several security considerations apply broadly across all LLM services integrated into

Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.

.

Data privacy and retention: Different providers have varying policies regarding how long they retain user inputs and whether data is used for model training.

Cross-border data transfers: Data processed by LLM services may be transferred internationally, raising compliance questions under regulations like GDPR.

Unauthorized access: Without proper authentication and access controls, LLM endpoints can be exposed to unauthorized usage.

Output reliability: LLM outputs can vary with different prompts or model updates, complicating audit trails and reproducibility requirements.



Security Policies and Compliance by Provider


To help you make informed decisions about which AI services to use within

Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.

, the following sections provide security policy information and data protection details for each supported provider.

TMS Software Delphi  Components tmsaistudio
OpenAI



Country of Origin: United States

Security Policy:

Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.



GDPR/EEA Notes: OpenAI offers a Data Processing Addendum (DPA) for enterprise, API, and ChatGPT Team/Edu users supporting GDPR compliance. Eligible customers can select regional data residency in Europe to comply with local data sovereignty requirements. However, OpenAI received a 15 million GDPR fine from Italy's data protection authority in 2025, with ongoing complaints regarding data accuracy, consent, and processing practices.

TMS Software Delphi  Components tmsaistudio
Google Gemini



Country of Origin: United States

Security Policy: n/a; privacy policy here:

Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.



GDPR/EEA Notes: Gemini supports EU data residency with region-locking to dedicated EU regions (europe-west12, de-central1) for enterprise and team workspaces. The Workspace Privacy Hub provides detailed controls for data governance, automated purging, and admin logs. Consumer plans (Free/Pro) cannot enable regional data residency.

TMS Software Delphi  Components tmsaistudio
Anthropic Claude



Country of Origin: United States

Security Policy:

Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.



GDPR/EEA Notes: Claude offers Data Processing Agreements for enterprise customers to support GDPR compliance. Organizations must control how personal data appears in prompts and outputs, as compliance remains the customer's responsibility. The "Help improve Claude" toggle controls training consentwhen disabled, data retention is limited to 30 days with no model training permitted; when enabled, data may be retained for up to 5 years. Some concerns exist regarding data processing through Anthropic's infrastructure and potential data transfers to the United States.

TMS Software Delphi  Components tmsaistudio
Mistral AI



Country of Origin: France

Security Policy:

Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.



GDPR/EEA Notes: As a French company headquartered in Paris and operating under full EU jurisdiction, Mistral offers strong GDPR compliance. Mistral updated its privacy policy in February 2025 after a GDPR complaint, extending opt-out rights from model training to all users (previously only premium users). Enterprise customers can access Mistral models via La Plateforme, a Europe-hosted API with DPA available, or self-host models for maximum privacy and zero external data exposure. Le Chat Pro users' inputs are not retained or used for training.

TMS Software Delphi  Components tmsaistudio
Perplexity AI



Country of Origin: United States

Security Policy:

Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.



GDPR/EEA Notes: Perplexity claims GDPR compliance but this has not been independently verified by EU data protection authorities. Enterprise Pro data is not used for training, though metadata is processed. The platform uses cookies and third-party analytics even in enterprise contexts. Organizations should conduct their own Data Protection Impact Assessments (DPIAs) before deployment, as compliance claims are based on self-disclosure rather than independent audits.

TMS Software Delphi  Components tmsaistudio
xAI Grok



Country of Origin: United States

Security Policy: n/a; privacy policy here:

Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.



GDPR/EEA Notes: xAI offers a Data Processing Addendum for enterprise customers and maintains a separate Europe Privacy Policy Addendum. The Irish Data Protection Commission launched a GDPR investigation in April 2025 into xAI's processing of personal data from EU/EEA users' public X posts for training Grok. Approximately 25% of European organizations have blocked Grok AI amid privacy concerns. When Private Chat is enabled, conversations are deleted within 30 days.

TMS Software Delphi  Components tmsaistudio
DeepSeek



Country of Origin: China

Security Policy: n/a; privacy policy here:

Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.



GDPR/EEA Notes: DeepSeek presents significant GDPR compliance challenges as all data is stored in China, where privacy protections differ substantially from EU standards. Italy's Garante officially banned DeepSeek citing non-compliance with EU privacy regulations. South Korea's data protection authority also banned DeepSeek for violations of cross-border data transfer regulations. DeepSeek's privacy policy states data may be stored indefinitely, used for AI training, and shared with advertising partners. Under Chinese law, data could be accessed by government authorities if requested. The service provides limited evidence of strong user privacy protections and lacks anonymization or minimal-retention policies.

TMS Software Delphi  Components tmsaistudio
Ollama (Self-Hosted/Local)



Country of Origin: Not applicable (open-source, self-hosted)

Security Documentation: n/a; minimal security policy here:

Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.



GDPR/EEA Notes: Ollama allows complete data sovereignty since models run entirely on local infrastructure with no external data transmission. This provides inherent GDPR compliance as data never leaves the organization's control. However, security responsibility falls entirely on the implementing organization. Research has identified over 1,100 publicly exposed Ollama servers, with approximately 20% hosting models without proper authentication. Organizations must implement authentication mechanisms, network segmentation, firewalls, and access controls when deploying Ollama.

Making Informed Security Decisions


When selecting AI services to use within

Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.

, organizations should evaluate their specific security requirements, regulatory obligations, and risk tolerance. European organizations with strict GDPR requirements may prefer EU-based providers like Mistral or services offering explicit EU data residency like Gemini and OpenAI enterprise plans. Organizations requiring complete data sovereignty might opt for self-hosted solutions like Ollama despite the additional security management responsibilities.

Regardless of which services are selected, organizations should implement additional protective measures including Data Processing Agreements with vendors, regular security audits, user training on avoiding sensitive data in prompts, and clear policies governing AI usage within their development workflows.


Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.

LLM-agnostic architecture provides the flexibility to choose the providers that best align with each organization's security posture and compliance requirements while maintaining a consistent development experience.


Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.





Источник:

Пожалуйста Авторизируйтесь или Зарегистрируйтесь для просмотра скрытого текста.

 
Вверх Снизу