Confident Security, "the Signal for AI," comes out of stealth with $4.2M

by SkillAiNest

As consumers, businesses, and governments chase the promise of cheap, fast, and seemingly magical AI tools, one question keeps getting in the way: how do I keep my data private?

Tech companies such as OpenAI, Anthropic, xAI, Google, and others are quietly retaining user data to improve or monitor their models, even in some enterprise contexts where companies assume their information is off limits. For companies in highly regulated industries, or those building on the frontier, this gray area can be a deal breaker. Concerns about where data goes, who can see it, and how it might be used are slowing AI adoption in sectors like healthcare, finance, and government.

Enter San Francisco-based startup Confident Security, which aims to be "the Signal for AI." The company's product, CONFSEC, is an end-to-end encryption tool that wraps around foundation models, guaranteeing that prompts and metadata cannot be stored, seen, or used for AI training, whether by the model provider or any third party.

"The second that you hand your data over to someone else, you've essentially reduced your privacy," Jonathan Mortensen, founder and CEO of Confident Security, told TechCrunch. "And the goal of our product is to remove that trade-off."

Confident Security came out of stealth on Thursday with $4.2 million in seed funding from Decibel, South Park Commons, Ex Ante, and Swyx, TechCrunch has exclusively learned. The company aims to serve as an intermediary vendor between AI providers and their customers, such as hyperscalers, governments, and enterprises.

Even AI companies could see the value in offering Confident Security's tool to enterprise clients as a way to unlock that market, Mortensen said. He added that CONFSEC is also a good fit for the new AI browsers hitting the market, such as Perplexity's recently released Comet, to guarantee customers that their sensitive data isn't being stored on a server that the company or bad actors could access, or used to train models on their work.

CONFSEC is modeled on Apple's Private Cloud Compute (PCC) architecture, which Mortensen says "is 10x better than anything out there" at guaranteeing that Apple cannot see your data when it runs certain AI tasks in the cloud.


Like Apple's PCC, Confident Security's system works by first anonymizing data, encrypting and routing it through services such as Cloudflare or Fastly so that servers never see the original source or content. Next, it uses advanced encryption that only allows decryption under strict conditions.

"So you can say you're only allowed to decrypt this if you are not logging the data, and you're not going to use it for training, and you're not going to let anyone see it," Mortensen said.
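The article doesn't publish CONFSEC's protocol, but the two ideas it describes, an anonymizing relay that never sees plaintext, and decryption gated on a no-logging/no-training policy, can be modeled in a few lines. The sketch below is purely illustrative: the shared key, the toy SHA-256 stream cipher, and the function names are assumptions for demonstration, not Confident Security's implementation (a real system would use an AEAD cipher and attested key exchange).

```python
import hashlib
import json
import secrets

KEY = secrets.token_bytes(32)  # stand-in for a key released only to attested servers
REQUIRED_POLICY = {"no_logging": True, "no_training": True}

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 in counter mode), for illustration only."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

def client_encrypt(prompt: str) -> bytes:
    """Client encrypts the prompt plus the policy it demands of the server."""
    payload = json.dumps({"prompt": prompt, "policy": REQUIRED_POLICY})
    return _keystream_xor(KEY, payload.encode())

def relay(ciphertext: bytes) -> bytes:
    """Anonymizing relay (Cloudflare/Fastly in the article's description):
    it strips the sender's identity and forwards opaque bytes, nothing more."""
    return ciphertext

def server_decrypt(ciphertext: bytes, server_config: dict) -> str:
    """Server may decrypt only if its configuration satisfies the policy."""
    payload = json.loads(_keystream_xor(KEY, ciphertext).decode())
    for key, value in payload["policy"].items():
        if server_config.get(key) != value:
            raise PermissionError(f"policy violated: {key}")
    return payload["prompt"]

compliant = {"no_logging": True, "no_training": True}
msg = relay(client_encrypt("patient record summary"))
print(server_decrypt(msg, compliant))  # a compliant server recovers the prompt
```

A server whose config sets `no_logging: False` raises `PermissionError` instead of recovering the prompt, which is the trade-off removal Mortensen describes: the data is unusable except under the customer's stated conditions.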

Finally, the software running the AI inference is publicly logged and open to review, so that experts can verify its guarantees.
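The idea behind this last step is a transparency log: a client refuses to talk to a server unless the software it claims to run has been published where outside experts can audit it. The sketch below is an assumption-laden toy, not CONFSEC's actual mechanism; the in-memory list stands in for a real append-only public log.

```python
import hashlib

public_log: list[str] = []  # stand-in for a public, append-only transparency log

def publish(release: bytes) -> str:
    """Publisher posts the hash of each software release for public review."""
    digest = hashlib.sha256(release).hexdigest()
    public_log.append(digest)  # append-only: entries are never edited or removed
    return digest

def client_accepts(reported_digest: str) -> bool:
    """Client-side check: only connect to software the public has seen."""
    return reported_digest in public_log

audited = publish(b"inference-server v1.0")
print(client_accepts(audited))                                      # True
print(client_accepts(hashlib.sha256(b"secret fork").hexdigest()))   # False
```

The point is that an unaudited build, even one byte different, hashes to a value absent from the log, so clients can detect and reject it.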

"Confident Security is ahead of the curve in recognizing that the future of AI depends on trust built into the infrastructure," Decibel partner Jess Leao said in a statement. "Without solutions like this, many enterprises can't move forward with AI."

It's still early days for the year-old company, but Mortensen said CONFSEC has been tested, externally audited, and is production-ready. The team is in talks with banks, browsers, and search engines, among other potential customers, about adding CONFSEC to their infrastructure stacks.

"You bring the AI, we bring the privacy," said Mortensen.
