Sherpa.ai Artificial Intelligence Technology
Privacy-Preserving
The most advanced and complete technology on the market for developing Artificial Intelligence with Data Privacy.
“Federated Learning accelerates model development while protecting privacy.”
“Federated Learning: A managed process for combining models trained separately on separate data sets that can be used for sharing intelligence between devices, systems, or firms to overcome privacy, bandwidth, or computational limits.”
While Federated Learning is a nascent technology, it is highly promising and can enable companies to realize transformative strategic business benefits. "FL is expected to make significant strides forward and transform enterprise business outcomes responsibly.”
“Federated Learning: AI's new weapon to ensure privacy.”
“Federated Learning allows AI algorithms to travel and train on distributed data that is retained by contributors. This technique has been used to train machine-learning algorithms to detect cancer in images that are retained in the databases of various hospital systems without revealing sensitive patient data.”
Federated
Learning
After years of research, Sherpa.ai has developed the most advanced privacy-preserving federated learning platform, which is having a major impact in academia and industry.
Federated Learning is a Machine Learning paradigm aimed at learning models from decentralized data, such as data located on users' smartphones, in hospitals, or in banks, while guaranteeing data privacy.
This is achieved by training the models locally at each node (for example, at each hospital, each bank, or each smartphone), sharing only the parameters updated at each node (without sharing user data), and aggregating them securely to build a better global model.
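The process described above can be sketched as a minimal Federated Averaging loop. This is an illustrative sketch only: the node data, the linear model, and the hyperparameters are invented for the example and are not Sherpa.ai's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decentralized data: each node (e.g. a hospital) keeps its
# own samples of a common linear relationship y = 2*x + 1 plus noise.
def make_node(n):
    x = rng.uniform(-1, 1, size=n)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.05, size=n)
    return x, y

nodes = [make_node(n) for n in (40, 60, 100)]

def local_update(w, b, x, y, lr=0.5, epochs=20):
    """Train locally; only the updated parameters ever leave the node."""
    for _ in range(epochs):
        err = (w * x + b) - y
        w -= lr * np.mean(err * x)
        b -= lr * np.mean(err)
    return w, b

# Federated Averaging: the server aggregates parameters, never raw data.
w_global, b_global = 0.0, 0.0
for _ in range(5):  # communication rounds
    updates, sizes = [], []
    for x, y in nodes:
        updates.append(local_update(w_global, b_global, x, y))
        sizes.append(len(x))
    weights = np.array(sizes, dtype=float) / sum(sizes)  # weight by sample count
    w_global = float(np.dot(weights, [u[0] for u in updates]))
    b_global = float(np.dot(weights, [u[1] for u in updates]))

# After a few rounds, (w_global, b_global) approaches the true (2.0, 1.0)
# even though no node ever shared its raw samples.
```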
Benefits
Companies are implementing Federated Learning because their data remains locked on their own servers; only the predictive model, with the knowledge acquired, is transferred between parties.

More powerful collaborative models, or models not feasible with standard solutions, without exchanging private data.
Data privacy by design, with no risk of data being compromised.
Regulatory compliance. Data never leaves the environment of the parties involved.
Lower risk of data leaks. Potential attack surfaces are reduced.
Transparency about how models are trained and how data is used.
Where to apply
Federated Learning
This technology is disruptive in cases where guaranteeing data privacy is mandatory.
When the data contains confidential information, such as private patient data, personal financial information, or any other sensitive information.
Due to data privacy legislation, healthcare institutions, banks, and insurance companies cannot share individual records, but they would benefit from machine learning improvements if they could train models with data from other entities.
When two parties want to leverage their data without sharing it. For example, two insurance companies could improve fraud detection by training models through federated learning, so that both companies would have a highly accurate predictive algorithm, but at no point would either share its business data with the other party.
Federated learning paradigms
Federated Learning can be classified into horizontal, vertical and federated transfer learning, according to how data is distributed among the agent nodes in the feature and sample spaces.
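The difference between the partitioning schemes can be illustrated with a toy table. All names and values below are invented for the example:

```python
# One notional "full" table of insurance clients; in practice no single
# party ever holds all of it.
full_rows = [
    # (client_id, age, income, claims)
    ("u1", 34, 51000, 0),
    ("u2", 45, 62000, 2),
    ("u3", 29, 48000, 1),
    ("u4", 52, 70000, 0),
]

# Horizontal FL: same feature space (same columns), different samples
# (different rows), e.g. two hospitals holding records of different patients.
party_a_rows = full_rows[:2]
party_b_rows = full_rows[2:]
assert [r[0] for r in party_a_rows] != [r[0] for r in party_b_rows]

# Vertical FL: same samples (shared clients), different features (columns),
# e.g. a bank holds income data and an insurer holds claims data for the
# same users. Federated transfer learning covers the case where this
# overlap of common users is very small.
bank_cols = {cid: (age, income) for cid, age, income, _ in full_rows}
insurer_cols = {cid: claims for cid, _, _, claims in full_rows}
assert bank_cols.keys() == insurer_cols.keys()  # same users in common
```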
HORIZONTAL FEDERATED LEARNING

Horizontal Federated Learning is introduced in those scenarios where data sets share the same feature space (same type of columns) but differ in samples (different rows).
Use cases: Diagnosis of diseases.
VERTICAL FEDERATED LEARNING

Two parties or companies want to take advantage of their data without sharing it, each holding different features for the same individuals. In this case, to perform the prediction, both parties need to have the same clients or users in common.
Use cases: Two insurance companies could improve fraud detection training models through federated learning so that both companies would have a highly accurate predictive algorithm, but they would not share their business data with the other party.
FEDERATED TRANSFER LEARNING

Two parties or companies want to take advantage of their data without sharing it, but they only have very few clients or users in common.
The system can learn from the common users, transfer the knowledge, and apply it to new clients.
Use cases: Two insurance companies could improve fraud detection by training models through federated learning, so that both companies would have a highly accurate predictive algorithm without sharing their business data with the other party.
DIFFERENTIAL PRIVACY
ON TOP OF EVERYTHING
Differential Privacy is a statistical technique for providing data aggregations while avoiding the leakage of individual data records. This technique ensures that malicious agents intercepting the communication of local parameters cannot trace this information back to the data sources, adding an additional layer of data privacy.
FULLY INTEGRATED WITH
DIFFERENTIAL PRIVACY

NOISE ADDING MECHANISMS FOR
DIFFERENTIAL PRIVACY
- The Gaussian Mechanism, which adds Gaussian noise, is implemented in cases where maximizing accuracy is what the model is aiming for.
- The Laplace and Exponential Mechanisms are implemented in those models in which privacy preservation is the top priority.
- The Laplace and Gaussian Mechanisms are focused on numerical answers, in which noise is added directly to the answer itself. The Exponential Mechanism, on the other hand, returns a precise answer without added noise, while still preserving Differential Privacy.
This Federated Learning and Differential Privacy platform is highly flexible and scalable. Therefore, further Differentially Private mechanisms can be added.
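As a rough illustration of how the Laplace and Gaussian mechanisms operate, the sketch below implements the standard textbook versions of both; it is a generic example, not Sherpa.ai's implementation, and the query and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """epsilon-DP: noise scale grows with sensitivity, shrinks with epsilon."""
    scale = sensitivity / epsilon
    return true_answer + rng.laplace(0.0, scale)

def gaussian_mechanism(true_answer, sensitivity, epsilon, delta):
    """(epsilon, delta)-DP via the classic analytic bound (for epsilon < 1)."""
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return true_answer + rng.normal(0.0, sigma)

# Example: privately release a count query over one node's records.
ages = np.array([34, 45, 29, 52, 41])
true_count = int(np.sum(ages > 40))  # adding/removing one person changes
noisy = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)  # it by 1
```

The released `noisy` value is useful in aggregate, yet any single individual's presence in `ages` is statistically masked by the noise.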

ROBUST DEFENSE MECHANISMS
AGAINST ADVERSARIAL ATTACKS
DEFENSE AGAINST PRIVACY/INFERENCE ATTACKS
Federated Learning models, if not protected, can be tricked into giving incorrect predictions, even any result an attacker desires. The process of designing an input in a specific way to obtain an incorrect result is known as an adversarial attack. These attacks are aimed at inferring information from the training data.
DEFENSE AGAINST
DATA ATTACKS
The best way to check if a defense is satisfactory is to test it with different types of attacks. Therefore, a wide range of attacks have been designed in order to verify that the models are completely private.

DEFENSE AGAINST
MEMBERSHIP INFERENCE ATTACKS
These defenses at all times meet organizational requirements and guarantee data privacy, in accordance with current legislation.

DEFENSE AGAINST POISONING ATTACKS
Poisoning attacks aim to compromise the global training model: malicious users inject fake training data with the aim of corrupting the learned model, affecting its performance and accuracy.
DEFENSE AGAINST
BYZANTINE ATTACKS
Sherpa.ai's advanced mechanisms ensure the defense of the federated model against malicious attacks aimed at reducing its performance. Protection is based on identifying clients with anomalous behavior and preventing them from participating in the aggregation process.
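A common way to implement this kind of anomaly-based filtering is to compare each client's update against the consensus and drop outliers before averaging. The sketch below is illustrative only: the threshold rule and all values are invented and do not describe Sherpa.ai's actual mechanism.

```python
import numpy as np

def robust_aggregate(updates, ratio_thresh=2.5):
    """Exclude anomalous client updates before averaging (a common
    Byzantine-robust strategy; the thresholding rule is illustrative)."""
    updates = np.asarray(updates, dtype=float)
    median = np.median(updates, axis=0)                 # consensus update
    dist = np.linalg.norm(updates - median, axis=1)     # distance to consensus
    typical = np.median(dist) + 1e-12
    keep = dist / typical < ratio_thresh                # flag far-off clients
    return updates[keep].mean(axis=0), keep

# Nine honest clients report similar gradients; one Byzantine client
# reports a wildly scaled update to degrade the global model.
honest = [[1.0 + 0.01 * i, -2.0] for i in range(9)]
byzantine = [[50.0, 40.0]]
agg, kept = robust_aggregate(honest + byzantine)
# kept marks the last (malicious) client False, so `agg` is the mean
# of the honest updates only.
```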

DEFENSE AGAINST BACKDOOR ATTACKS
Unprecedented algorithms capable of nullifying backdoor attacks have been established. With this technology, the performance and security of the models are increased.

BIAS PREVENTION IN
FEDERATED LEARNING MODELS
BIAS PREVENTION
At Sherpa.ai we have developed a technology capable of tackling the problem of bias, which arises from the particularities of the data stored at each node, and solving it in the most efficient way possible. To do this, we adjust, particularize, and adapt each model to each client while preserving global learning.

PREVENTING BIAS THROUGH
PERSONALIZATION
This is achieved by dynamically modifying the device loss functions in each learning round, so that the resulting model is unbiased towards any user.

Other technological aspects
of Sherpa.ai's privacy-preserving technology
Secure Multi-Party
Computation
Secure Multi-Party Computation ensures that your business's sensitive data remains protected, without undercutting your ability to extract all the necessary information from that data.
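A standard building block behind Secure Multi-Party Computation is additive secret sharing. The minimal sketch below shows the generic technique (not Sherpa.ai's protocol): three parties compute a joint sum without any party, or the aggregator, seeing another's input.

```python
import random

P = 2**61 - 1  # a public prime modulus; all arithmetic is done mod P

def share(secret, n=3):
    """Split a value into n additive shares; any n-1 of them together
    reveal nothing about the secret."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

# Three parties jointly compute the sum of their private inputs.
inputs = [42, 7, 100]
all_shares = [share(x) for x in inputs]

# Party j locally sums the j-th share of every input...
partial = [sum(s[j] for s in all_shares) % P for j in range(3)]
# ...and only these partial sums are published and combined.
total = sum(partial) % P   # equals 149, with no individual input revealed
```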

Private Entity
Resolution
With the use of cutting-edge cryptographic techniques, datasets held by different parties can be synchronized and their common entities identified, while always protecting privacy and never affecting the performance of the trained models.

SYNTHETIC DATA GENERATION
Sherpa.ai’s technology makes use of advanced synthetic data generation to eliminate security loopholes such as membership inference. With this unconventional solution, the ability to move away from standard methods is gained, which greatly reduces communication costs without degrading the accuracy of the predictive model. The synthetic data captures the underlying structure and reproduces the same statistical distribution as the original data, rendering it indistinguishable from the real data.

Compliance

Privacy
For Sherpa.ai, data privacy is a fundamental ethical value.
For this reason, our platform complies with all current Data Protection regulations (GDPR) and is in line with the draft of the European Union's new regulation on Artificial Intelligence.

Security
Information security is an absolute priority at Sherpa.ai.
We believe that security must comply with quality standards as well as with all relevant regulations. For this reason, we are certified in the ISO 27001 data security standard, and our platform received the CogX 2021 award for Outstanding Contribution to Tech Regulation and was a finalist for Best Solution for Privacy and Data Protection.
CONTACT SHERPA.AI
Take the lead in your sector and be more competitive.
Write to us and let's talk about Privacy-Preserving Artificial Intelligence to scale your business.

QUOTES FROM
OUR TEAM
We have reached the highest levels in the implementation of algorithms for Sherpa.ai's privacy-preserving Artificial Intelligence platform, using the most advanced methodologies in applied mathematics.
Enrique Zuazua, Ph.D.
Senior Associate Researcher in Algorithms at Sherpa.ai
Sherpa is leading the way in how artificial intelligence solutions will be built, preserving user privacy in all its forms.
Tom Gruber
Senior Advisor in AI at Sherpa.ai