- WeTransfer users became enraged when it appeared that the latest terms of service allowed their data to be used to train AI models.
- The company moved quickly to assure users that it does not use uploaded content for AI training.
- WeTransfer has rewritten the clause in clearer language.
The file-sharing platform WeTransfer spent a difficult day assuring users that, despite a refresh of its terms of service, it has no intention of using uploaded files to train AI models, after language in the new terms suggested that anything sent through the platform could be used to build or improve machine learning tools.
A clause buried in the TOS said the company had the right to use data "for the purposes of the service or new technologies or services, including to improve the performance of machine learning models that enhance our content moderation process."
The mention of machine learning, combined with the broad wording of the section, suggested that WeTransfer could do whatever it wanted with users' data, with no specific qualifiers or safeguards to allay suspicion.
Perhaps understandably, many WeTransfer users, including many creative professionals, were worried about what this would mean. Many posted plans to move from WeTransfer to rival services, while others urged people to encrypt their files or go back to old-school physical delivery methods.
"Time to stop using @WeTransfer, which has decided that from August 8 it will own anything you transfer." — July 15, 2025
WeTransfer noticed the growing anger around the language and rushed to put out the fire. The company rewrote the TOS section and shared a blog post addressing the confusion, repeatedly promising that users' data will not be used, especially for AI models, without their permission.
"From your feedback, we understood that it may have been unclear that you retain ownership and control of your content. We've since updated the terms so they can be more easily understood," the blog post explained. "We've also removed the mention of machine learning, as it's not something WeTransfer uses in connection with content and may have caused some concern."
While still granting WeTransfer a standard license to operate and improve its service, the new text drops the machine learning references, focusing instead on the familiar scope needed to run and improve the platform.
Clear privacy
If this feels like a bit of déjà vu, that's because something similar happened with another file transfer platform, Dropbox, a year and a half ago. A fine-print change suggested that Dropbox was taking content uploaded by users to train AI models. After a public outcry, Dropbox apologized for the confusion and fixed the offending boilerplate.
That this has happened again, in such similar fashion, is notable not because of the strange legal language companies use, but because of the knee-jerk distrust users now feel toward how these companies protect their information. When things are uncertain, people assume the worst, and companies have to make an extra effort to defuse those tensions.
Creative professionals are understandably sensitive to any appearance of data misuse in an era when tools such as DALL·E, Midjourney, and ChatGPT are built on the work of artists, writers, and musicians. Given artists' concerns over how their creations are used, not to mention broader doubts about corporate data practices, the kind of assurance WeTransfer offered will likely be something tech companies need to provide quickly, lest they face the wrath of their users.