Imagine covering elections in real time with a "second head" that helps you connect threads, summarize speeches and spot fake accounts. That's the picture Walter Fernandez, CNA's editor-in-chief, paints of how artificial intelligence is no longer an experiment but a tool integrated into the newsroom. [The full interview is on OpenAI](https://openai.com/index/cna-walter-fernandez).
What CNA did and why it matters
CNA started experimenting with AI in 2019 and, since then, decided to move fast but with clear rules: no cloned voices and no AI-generated footage in news coverage. The strategy was pragmatic: be early without being reckless. (openai.com)
The reason this matters is simple. CNA is a broadcaster with global reach whose coverage enters millions of homes and devices; when a newsroom at that scale integrates AI thoughtfully, it changes how information is produced and validated. Can you imagine the scale of the impact if tools fail or are used without controls? Exactly: that's why the rules matter. (openai.com)
A concrete example: elections in Singapore
In its recent coverage of Singapore's general election, CNA used ChatGPT as a kind of second brain: internal GPTs loaded with verified information to provide context, and reasoning models to analyze social media campaigns. One of those models detected a link between suspicious accounts that had changed their names, a finding the team hadn't expected and one that helped uncover hidden connections in real time. Isn't that the kind of discovery journalism needs today? (openai.com)
How the newsroom's routines and culture changed
Adoption wasn't magical: they started by asking journalists what hurt the most. The answer: covering long parliamentary sessions. The solution was to build "Parliament AI": face recognition for more than 90 parliamentarians, automatic transcription and searchable summaries. When reporters saw real improvements in their work, acceptance grew and initiatives multiplied. (openai.com)
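CNA hasn't published how Parliament AI is built, but the "searchable summaries" piece of that pipeline can be illustrated with a toy sketch. The snippet below, with entirely hypothetical names, indexes transcript segments by speaker (the speaker label would come from face recognition upstream, the text from automatic transcription) and lets a reporter search them by keyword:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    speaker: str  # hypothetically matched via face recognition upstream
    text: str     # hypothetically produced by automatic transcription

@dataclass
class TranscriptIndex:
    segments: list = field(default_factory=list)

    def add(self, speaker: str, text: str) -> None:
        """Append one transcribed segment to the session index."""
        self.segments.append(Segment(speaker, text))

    def search(self, keyword: str) -> list:
        """Return every segment whose text contains the keyword (case-insensitive)."""
        kw = keyword.lower()
        return [s for s in self.segments if kw in s.text.lower()]

# Toy usage: two segments from a hypothetical sitting
idx = TranscriptIndex()
idx.add("MP Tan", "We must review the housing policy this quarter.")
idx.add("MP Lim", "Transport subsidies should be expanded.")
print(idx.search("housing")[0].speaker)  # → MP Tan
```

A real system would sit far above this sketch (speech-to-text, vector search, summarization), but the core product idea is the same: turn a multi-hour session into something a reporter can query in seconds.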
Today the team already has more than twenty custom GPTs, including a "Newsroom Buddy" for brainstorming and style checks. They also rolled out hundreds of enterprise licenses so the whole organization can access the tools and train internally. Those numbers show it wasn't an isolated pilot, but a transformation at scale. (openai.com)
Risks, limits and the practices they applied
- Prioritize public service over technological showmanship. Not everything AI can do should make it into publication.
- Implement human-in-the-loop processes: mandatory human verification for sensitive pieces.
- Create internal policies and governance before scaling use: CNA spent time drafting guides and setting cross-cutting controls.
In the team's own words, AI is a tool to fulfill the mission of public-service journalism, not an end in itself.
Practical lessons for other newsrooms and teams
- Start with a real problem, not with the tool. If AI solves a painful task, adoption will be natural.
- Design clear rules: what to do, what not to do, and who decides in risky cases.
- Train the whole organization: editors, journalists and audience teams should learn together.
- Measure results: speed, accuracy and usefulness to the audience, not impressions or volume of content.
Looking ahead
CNA proposes thinking of AI as a technological backbone that multiplies capabilities but requires governance and good practices. The invitation is simple and ambitious: don't wait on the sidelines, but don't be reckless either. Ready to build newsrooms where AI boosts quality rather than replacing it? CNA's experience shows it's possible, with organization, rules and a focus on public usefulness.