The Security Considerations of Artificial Intelligence
Two Reasons to Avoid Exposure
For two reasons, we strongly discourage using AI for any purpose where disclosure of sensitive information is possible. First, most AI services currently require that data be uploaded to the processor’s cloud, where the processor analyzes it extensively. That alone is problematic. Worse, uploaded material becomes feedstock for the large language models that drive a generative AI system’s “learning” for future processing. So, for example, using AI to customize a new client representation letter permanently exposes all of the tradeoffs and data that influence the content of the letter. ALL of them.
TFT routinely asks its own law firms to certify that no matters in which we’re engaged are exposed to AI or large language model machine-learning systems. We don’t want our materials to become raw material for others’ models.
We use select artificial intelligence engines for research, for limited translation of computer code, and as a strictly geo-fenced and very helpful tool for answering our customers’ service questions.
Recommendation
We recommend that firms looking for document assembly and automation solutions seriously consider whether they want their documents shared into unknown clouds, for unknowable purposes, forever. That doesn’t happen with the programs offered by TFT. Our programs do not share anything with anyone, not even us.
Apple’s Potentially Better Approach
Apple’s recent announcement of locally processed AI is a potential game changer if the company can follow through. On-device processing seems to eliminate the need to push AI inputs into a cloud, where they become research material for large language model development. Our techies describe Apple’s approach as “hugely positive” for reducing security risks, IF it proves true.
First published in our Word Warrior newsletter, which concentrates on professional data security issues.