BLOG POST

OPINION
 
APR 2025
by DIANA MOSQUERA

Behind every generative AI trend

Over the past few years, several companies have launched tools based on artificial intelligence (AI) that, under the guise of entertainment, utility or viral trends, have collected huge amounts of personal data. What at first seems like a game or a curiosity (seeing yourself aged, turned into an anime character or a fantasy illustration) is actually part of a broader strategy to collect faces, metadata and digital habits. This massive collection has profound implications for privacy, copyright and the environment.
FaceApp (2017)

FaceApp, launched in 2017, is an application developed by FaceApp Technology Limited, a company registered in Cyprus. Its founder is Yaroslav Goncharov, a former Microsoft and Yandex engineer [1]. The app gained popularity by letting users apply aging, rejuvenating or gender-swapping filters to their faces. In 2019, however, serious questions arose about its handling of personal data: the images were processed on external servers (Google Cloud and AWS), and although the company claimed they were deleted within 48 hours, there was no mechanism to guarantee it [2].

Since its inception, FaceApp is estimated to have collected more than 150 million photos, and the company has admitted to using them to train facial recognition algorithms, develop new features and refine its filters. Beyond these stated uses, the potential of such databases is immense: they could feed surveillance technologies, advertising, the generation of fictitious faces, or even be exploited by government entities or malicious actors. The app also collects metadata such as geographic location and device model [2]. Other apps such as Aging Booth, FaceLab or YouCam Makeup have likewise collected millions of images through features that appear purely recreational: behind the facial simulations and aesthetic filters, they serve as a means of capturing images of users' faces [3].
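The location and device metadata mentioned above usually travels inside the photo itself, in its EXIF block. As a minimal, standard-library-only sketch of how a user could strip that block from a JPEG before uploading it anywhere (the function name `strip_jpeg_exif` is my own; it handles ordinary baseline JPEGs, not every variant of the format):

```python
def strip_jpeg_exif(data: bytes) -> bytes:
    """Return the JPEG bytes with any Exif APP1 segment removed."""
    if data[:2] != b"\xff\xd8":  # every JPEG starts with the SOI marker
        raise ValueError("not a JPEG file")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data) - 1:
        if data[i] != 0xFF:
            out += data[i:]          # unexpected bytes: copy verbatim and stop
            break
        marker = data[i + 1]
        if marker == 0xDA:           # SOS: the entropy-coded image scan follows
            out += data[i:]
            break
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i:i + 2 + seg_len]
        # APP1/"Exif" segments are where GPS coordinates, device model
        # and timestamps live; drop them, keep everything else.
        if not (marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"):
            out += segment
        i += 2 + seg_len
    return bytes(out)
```

The image pixels are untouched; only the metadata segment disappears. Note that this removes data the user can see, but offers no protection against what the text describes: once the face itself is uploaded, it can still be used as training data.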


Clearview AI (2020)

Clearview AI is a US company whose facial recognition technology has been used by law enforcement and government agencies. Its database contains more than 30 billion images scraped from social networks and public websites without the consent of the individuals pictured. This practice has earned it multiple sanctions, such as the fine imposed by the Dutch Data Protection Authority, which deemed its activity illegal under the European Union's General Data Protection Regulation (GDPR) [3].

OpenAI and the Ghibli Trend (2025)

Recently, millions of people have uploaded their photos to platforms such as ChatGPT to transform them into the Studio Ghibli style. This viral trend has triggered a new cycle of massive data collection while replicating artistic styles without the recognition or consent of the original authors [5][6][7][8]. The use of Studio Ghibli's visual style poses a serious copyright conflict: to emulate the Japanese studio's aesthetic (soft colors, dreamlike landscapes, expressive strokes), AI models were trained on hundreds or thousands of images without any agreement with their creators. This kind of technological appropriation devalues artistic work accumulated over decades.

Moreover, these images are not used for aesthetic purposes alone. The collected data enable the development of facial recognition systems, algorithms for generating realistic images, and new forms of personalization in advertising, entertainment and even surveillance [5].

What apps also collect

It is not just faces that are handed over: the images also capture our homes, our pets, minors and other details of our private surroundings. Users often unknowingly grant broad, perpetual licenses to their images, opening the door to unforeseen or malicious uses. This is compounded by the lack of regulation in many regions, especially in Latin America, while companies change their privacy policies without warning, enabling ever more aggressive uses of our data. Consider cases such as:

* Facebook, which handed over private conversations to U.S. law enforcement in a court case.
* Twitter/X (2023), which updated its policy to allow the use of public and private data for AI training.
* Experian (2020), which amended its terms to exclude legal claims without notice.
* GoodRx and BetterHelp (2023), which shared medical data for advertising purposes.
* Cerebral and Monument (2024), which violated their own policies by sharing sensitive information with third parties.

These examples show how privacy is quietly being eroded.

The environmental impact of generative AI

Behind every AI-driven viral trend is an environmental cost that is rarely mentioned. Models such as Stable Diffusion XL, GPT-4 or DALL-E require enormous amounts of energy. A single image-generation session is estimated to emit as much CO₂ as a short car trip [9], and training an advanced model can exceed 500 tons of CO₂, equivalent to the annual emissions of 100 cars [10]. Data centers also use water to cool their servers: Microsoft reported a 34% increase in its water consumption in 2022, while Google recorded similarly notable increases [11]. One analysis found that a conversation of 20 to 50 prompts with ChatGPT can consume the equivalent of half a liter of water [11].

The accumulation of biometric data, the unauthorized use of artistic works and environmental damage remind us that what looks like a simple filter or a funny image has real consequences. It matters a lot who we give our data to, and what they do with it. Reclaiming our privacy is not just an individual act, but a collective effort that requires clear regulations, informed decisions and a shared ethical commitment. In the digital era, the loss of privacy is directly related to the weakening of democracy [12]; therefore, protecting personal data is essential to safeguard our freedoms, both individual and collective. Only in this way will we be able to build a true 'herd immunity' against technological exploitation.

References
[1] Wikipedia. (n.d.). FaceApp. https://es.wikipedia.org/wiki/FaceApp
[2] BBC Mundo. (2019, July 17). FaceApp: qué hay detrás de la aplicación que transforma tu rostro. https://www.bbc.com/mundo/noticias-49012256
[3] Dutch Data Protection Authority (Autoriteit Persoonsgegevens). (n.d.). https://autoriteitpersoonsgegevens.nl
[4] Vidnoz. (n.d.). Cómo envejecer rostros online con IA. https://es.vidnoz.com/inteligencia-artificial/envejecer-rostros-online-gratis.html
[5] El País. (2025, April 2). Por qué la locura de ChatGPT con imágenes estilo Ghibli no es solo un meme. https://elpais.com/tecnologia/2025-04-02
[6] La Nación. (n.d.). Cómo usar ChatGPT para hacer imágenes al estilo Ghibli. https://www.lanacion.com.ar/tecnologia
[7] CNN en Español. (2025, March 28). Studio Ghibli IA e imágenes virales. https://cnnespanol.cnn.com/2025/03/28
[8] OpenAI. (2025). Introducing 4o Image Generation. https://openai.com/index/introducing-4o-image-generation
[9] SMOWL. (n.d.). Impacto ambiental de la IA. https://smowl.net/es/blog/impacto-ambiental-ia
[10] GTA Ambiental. (n.d.). La IA y el medioambiente. https://gtaambiental.com/inteligencia-artificial/
[11] La Vanguardia. (2023, April 15). ChatGPT y su consumo de agua. https://www.lavanguardia.com/vida/20230415
[12] Ethic. (n.d.). No es coincidencia que esta pérdida de democracia se dé al mismo tiempo que el auge de la tecnología digital. https://ethic.es/entrevistas/entrevista-carissa
