Is your AI giving you censored information?

Artificial Intelligence apps are proving to be both a help and a hindrance. They may offer support in checking the grammar of your essay or creating fun images, but they are also making it harder to protect intellectual property, and even to obtain accurate, unbiased information. Google’s AI, Gemini, for example, has a safety/censorship module that kicks in when it encounters information it deems “sensitive” or that may compromise its corporate bottom line. How does this fit in with the notorious Project Nimbus?

Reza Omar, the Strategic Research Director at Citizen Surveys, explores the realm of AI in a discussion with Radio 786.


Click here to listen to the full discussion