There is a lot of talk these days about encouraging the creators of AI tools to tag what those tools produce so that people will know it is fake; but the idea that this would prevent the spread of misinformation is preposterous, since there will always be other tools, from other sources, that do not advertise their fakery.
The correct solution is not to securely identify lies, but rather to identify what is reliable. This might be done by a system of signing whatever we distribute (and want others to actually believe) with something like a secure watermark(*) identifying the source. Then anyone who wants reliable info can just ignore anything that is not signed by a source they trust. Some will choose to trust unreliable sources; but I do believe that if some sources are truly scrupulous (much more so than the current mainstream media), then eventually they will come to be recognized as such (and in any case people will at least be sure of the source of whatever they are looking at).
(*) For example, if a message or file is encrypted with the sender’s private key, so that it can only be read by applying a widely distributed public key permanently associated with that sender, then no one who lacks the private key can produce a message that decodes to anything but nonsense under that public key. This is, in essence, what a digital signature does: the sender signs, and anyone holding the public key can verify. (In a sense, what earns credibility here is not the sender per se, but rather the public key itself, which might eventually become known as unlocking only things that are true.)
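To make the footnote concrete, here is a minimal sketch in Python using the cryptography library. It uses Ed25519 signatures rather than literally encrypting the whole message with the private key (which is how signature schemes work in practice), but the effect is the one described above: only the holder of the private key can produce a signature that checks out against the published public key, so a reader can simply discard anything that does not verify against a key they trust. The source names, messages, and the trusted-keys table are made up for illustration.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # The publisher generates a keypair once: the private key stays secret,
    # the public key is distributed widely and tied to the publisher's identity.
    publisher_private = Ed25519PrivateKey.generate()
    publisher_public = publisher_private.public_key()

    # A reader's table of sources they have decided to trust (hypothetical).
    trusted_keys = {"Example News Desk": publisher_public}

    # The publisher signs everything they release.
    article = b"Nothing burned down today. Really."
    signature = publisher_private.sign(article)

    # The reader keeps only material that verifies against a trusted key,
    # and ignores everything else (unsigned, unknown, or badly signed).
    def is_trustworthy(message: bytes, sig: bytes, claimed_source: str) -> bool:
        key = trusted_keys.get(claimed_source)
        if key is None:
            return False  # unknown source: ignore
        try:
            key.verify(sig, message)
            return True   # only the private-key holder could have produced this
        except InvalidSignature:
            return False  # forged or tampered with: ignore

    print(is_trustworthy(article, signature, "Example News Desk"))           # True
    print(is_trustworthy(b"Altered text.", signature, "Example News Desk"))  # False

Note that the reader never has to decide whether the content itself is true; they only check which key it verifies against, and reputation then attaches to that key over time.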
Source: Google unveils invisible ‘watermark’ for AI-generated text