Artificial Intelligence (AI) is rapidly developing, and marketers are eager to tout the benefits of new AI features incorporated into their products. But as advertising of new AI features becomes more common, legal scrutiny of the substantiation and privacy implications of AI claims is also intensifying. A recent example is Case #7485, in which the National Advertising Division (NAD) and the Children’s Advertising Review Unit (CARU) reviewed Dorel Juvenile Group’s advertising for the Maxi-Cosi Sibia Bassinet and Starling Smart Bassinet, both of which feature the “CryAssist” AI technology.
The Advertising Claims
Dorel’s bassinets at issue use “CryAssist,” a feature marketed as using AI to “translate your little one’s cries, letting you know if they might be sleepy, fussy, gassy, agitated, or hungry.” Other prominent claims included assurances that “everyday conversation [is] kept private,” that “cries and cry data [are] kept anonymous and encrypted on our cloud,” and that all response-based features are “optional, ensuring control is always in your hands.” These claims are appealing to new parents but also raise questions about accuracy, privacy, and compliance with children’s data protection laws.
NAD’s Review of AI Claims
In response to NAD’s inquiry about these claims, Dorel provided substantial supporting evidence, including peer-reviewed research on the underlying AI, details of the model’s training on real infant cries, and validation studies comparing the AI’s classifications to medical and expert review. The evidence showed the AI could distinguish between different types of cries with about 92% accuracy (consistent with research showing accuracy between 89% and 94%) and, importantly, that this accuracy was maintained after the technology was integrated into the product. Dorel’s marketing also carefully avoided promising 100% accuracy, using qualifiers such as “might be” throughout the advertising.
In reviewing Dorel’s claims, NAD applied its well-established standards for advertising substantiation, noting that advertisers must have a “reasonable basis” for every claim, a standard that depends on the nature of the product, the type of claim, potential consumer harm, and what experts in the field would expect as proof. NAD found that AI-focused claims require robust, product-specific evidence, stating specifically that “the validity of claims about AI requires a focus on both the pre-deployment training data used to develop and teach the model, the testing and validation of the model to determine how well the model has learned and performs on new data, and verification that the specific AI model performs in the product being sold.”
NAD determined that it is not enough to rely on general AI capabilities or manufacturer promises when making AI claims. Rather, advertisers must demonstrate how the AI performs in the specific context of the product being sold. Applying this approach to Dorel’s advertising, NAD determined that Dorel’s substantiation was sufficient for the challenged performance claims (so long as the advertising did not imply perfect accuracy). NAD also found that the claims about user control and optionality were supported by the product’s design and interface, which allow parents to activate or deactivate CryAssist features and manage consent within the app.
Note on Privacy and Data Security: CARU’s Involvement
Although not the focus of this blog, CARU also joined the inquiry because Dorel’s products collect and process children’s voice data, assessing Dorel’s compliance with the Children’s Online Privacy Protection Act (COPPA). CARU found that Dorel’s privacy policy did not meet all COPPA requirements. Specifically, Dorel’s disclosures functioned as a general privacy notice rather than the targeted, direct notice to parents required under COPPA. Moreover, Dorel did not have a mechanism for obtaining verifiable parental consent before collecting children’s data. CARU concluded that to fully comply, Dorel would need to update its practices to provide direct parental notice and implement a reliable, affirmative parental consent process prior to data collection.
Lessons for Advertisers
NAD’s decision illustrates several lessons for companies looking to make AI-related claims:
- Rigorous, Product-Specific Substantiation is Likely Required. Advertisers must possess a “reasonable basis” for every claim made, determined by factors such as product category, claim type, potential consumer impact, and industry standards. For AI technology, substantiation must go beyond generic descriptions or theoretical capabilities. Instead, it appears that NAD will expect advertisers to provide evidence that the specific AI model behind a particular advertising claim performs reliably in the product as sold. Thus, testing should reflect ordinary consumer use of the AI features as incorporated into the product, and the results must be a good fit for the specific claims being made.
- Avoid Overpromising. Advertisers should not exaggerate the benefits or accuracy of their AI products or features. AI-driven features often operate probabilistically and are subject to limitations, so claims should use qualified language (e.g., “might be,” “may indicate”) rather than suggesting certainty or infallible results, especially when directed at vulnerable populations such as parents of infants.
- Retain Transparency and Documentation Across the AI Lifecycle. Advertisers should maintain traceable, documented procedures for the various stages of AI development and deployment. It appears that NAD will expect advertisers to be able to explain how their AI models were trained, validated, and calibrated for the actual devices being sold. Documentation of consent processes, privacy safeguards, and operational limitations should therefore be retained.
- Keep Pace with Regulatory Changes. The legal landscape for children’s privacy and AI-focused advertising claims is rapidly evolving. Advertisers should routinely review their privacy and data practices to ensure ongoing compliance, especially if their products are directed to children or use sensitive biometric data.
- Use Extra Caution when Marketing to Vulnerable Populations. When advertising products meant for infants or children (or their parents), extra care is required. NAD and CARU have repeatedly emphasized that advertising should be both truthful and sensitive to the unique needs and vulnerabilities of these populations. This includes avoiding misleading claims, ensuring privacy protections are robust, and making sure parents are fully informed and in control of their children’s data.
NAD’s review of Dorel’s advertising signals that AI-focused claims will be held to relatively high standards for substantiation and transparency. Advertisers cannot rely on hype or generic assurances. Instead, robust evidence, honest communication of limitations, and strict data privacy compliance will be essential. As always, our Faegre Drinker team is available to assist with the review of any AI claims.
The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.