Advanced artificial intelligence (A.I.) chatbots, such as ChatGPT, Gemini, and Claude, have exploded in popularity over the last two years, attracting millions of users who turn to these tools for on-demand answers. The rise has been so rapid that ChatGPT’s creator, OpenAI, hopes to reach approximately 1 billion users by the end of 2026.
Undeniably, chatbots have transformed the way we work and gather information; however, the legal system has yet to catch up with this swift adoption. Currently, no federal laws regulate how the information entered into A.I. chatbot tools may be used.
This poses a real risk for anyone using chatbots to assist with legal matters that require discretion. Estate planning is especially fraught: sensitive information about you or a family member enters an unregulated portal, and how that information will be used in the future is unknown.
While it’s easy to upload your trust, powers of attorney, will and health directives to a chatbot and ask for a comprehensive analysis, doing so could lead to data breaches, identity theft and your information being used to answer prompts from other users around the world.
Currently, A.I. chatbots are in their “Wild West” days, much like the internet in the early 2000s. As such, governing bodies are still working out how to protect your information and shield users from harm.
A.I. is not inherently harmful; rather, the problem is that we lack sufficient safeguards to ensure personal data is stored and used securely. When you provide financial records, family details and social security numbers to an attorney, the ethical rules that govern attorneys create a clear expectation of how that confidential information will be protected. No such requirement yet exists for A.I. chatbots, nor do companies have meaningful incentives to create one. Put simply, chatbots are not designed to be confidants.
Despite the lack of laws surrounding how chatbots can use the information we upload, there are ways to protect ourselves from potential dangers.
Above all, don’t input personal information into a chatbot. Avoid names, birthdays, phone numbers, social security numbers and medical information. If such details are necessary to elicit a useful answer from the tool, substitute placeholder details, such as pseudonyms, instead.
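For readers comfortable with a little scripting, the substitution advice above can be partly automated before any text is pasted into a chatbot. The sketch below is a hypothetical helper, not a complete scrubber: it uses simple regular expressions to swap a few obvious identifier formats (U.S. social security numbers, phone numbers, slash-style dates) for generic placeholders. Names, addresses and anything else without a fixed format still need manual pseudonyms.

```python
import re

# Patterns for a few common identifier formats, each mapped to a placeholder.
# This list is illustrative only; real documents contain many more formats.
PATTERNS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",          # social security numbers
    r"\b\d{3}[-.]\d{3}[-.]\d{4}\b": "[PHONE]",  # phone numbers
    r"\b\d{1,2}/\d{1,2}/\d{4}\b": "[DATE]",     # dates such as birthdays
}

def redact(text: str) -> str:
    """Replace matching sensitive patterns with generic placeholders."""
    for pattern, placeholder in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

# Example: the SSN, birthday and phone number are masked; the name is not,
# and would need to be replaced with a pseudonym by hand.
print(redact("Jane, SSN 123-45-6789, born 1/2/1960, call 555-123-4567."))
# → Jane, SSN [SSN], born [DATE], call [PHONE].
```

A script like this is a convenience, not a guarantee: it catches only the formats it knows about, so a careful manual read of anything you paste remains essential.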
For delicate matters like estate planning, chatbots should be used sparingly beyond the initial information-gathering stage. If you do use a chatbot to evaluate your estate plan, focus on the provisions themselves, not sensitive details like your children’s social security numbers. Be aware that sensitive provisions, such as a no-contest clause or a disinheritance clause, should remain confidential.
Thankfully, legal-specific chatbots have been created to provide heightened security protections. Lexis AI recently introduced a tool called Protégé General AI, which offers privacy protection through a fully encrypted environment.
Chatbots are a great resource for personal use, but they are not designed to keep your data safe – that’s simply not their job. There is no guarantee that uploaded information won’t be seen by others, even by users who aren’t trying to access your data. Chatbots are highly complex systems whose responses are informed by millions of data sources.
When handling estate planning, it’s essential to safeguard your personal information from falling into the wrong hands. If in doubt, consult an attorney who can help you protect your data while still making effective use of tools that clarify the complexities of wills, trusts, and other essential documents.