Financial advisors are still behind the curve on generative AI.

We saw this firsthand at the 2024 Invest in Women Conference during a panel on the use of generative AI tools. The enthusiasm of Dr. Sindhu Joseph, CEO of CogniCor, and Neva DePalma, General Counsel at Smarsh, stood in stark contrast to the palpable hesitation from the advisors in the audience. A hands-up poll showed most of the attendees had not yet explored AI tools in their own practice.

The reluctance wasn’t unfounded. DePalma’s insights highlighted the industry’s regulatory tightrope, underscoring the need for a measured approach to AI adoption. Advisors are rightfully protective of their clients’ sensitive data. Still, the advisors were curious enough about AI to fill the seats at this panel, the final session on the final day of the conference.

Our agency is all-in on AI to support the integrated communications of our advisor clients. We host multiple training sessions every week with both in-house experts and guests. This is all to say that we have relevant, hands-on experience in this realm. We wanted to share some of the sharp recommendations from the AI panel along with our own guidance.

HOW TO PRACTICE

The biggest impression we got from the crowd at Invest in Women is that advisors want to try this technology, but they’re not exactly sure how. DePalma spoke to advisors’ need to safeguard their clients and manage risk. For instance, using an AI chatbot to generate a blog post carries far less risk than, say, feeding sensitive client data directly into a free version of ChatGPT. (We actually used AI to create the article you’re reading now, but not how you might think. More on that in a bit.)

Your choice of tools matters, too. At our agency we use enterprise-grade AI tools that we vetted for security, and we developed our own guardrails and best practices around them. By contrast, the chatbots you can use for “free” tend to make no promises of confidentiality and run on less sophisticated AI models. As the saying goes, if you aren’t paying for the product, you, and the information you supply, probably ARE the product.

So where should you apply this technology? Look for the most complicated and time-consuming tasks in your day-to-day work. For example, you could look for ways to shave time from the process of client onboarding by using AI tools to capture transcripts or make more personalized deliverables.

Whatever you choose to do, we recommend you take a measured approach. AI tools can produce something that looks impressive at a glance, but repeated use will reveal telltale patterns and gaps in understanding that you may need to correct. ChatGPT, for instance, loves to pad its writing with flowery over-description and grandiose dependent clauses.

TALKING TO COMPLIANCE

You know what you want to practice, but how do you get permission? Some of the panel attendees said they would like to incorporate AI into their practice, but felt that their compliance departments would shut down any attempts. We don’t think that advisors and compliance professionals should be in conflict with each other. Good compliance work should be a conversation. So, have a conversation.

First, recognize that your compliance team wouldn’t be doing their jobs if they didn’t evaluate an emerging technology against the integrity of your business and the safety of your clients. That said, approaching a compliance officer for a yes-or-no answer right away will probably get you shut down.

Instead, give the technology a thorough evaluation of your own. Demonstrate exactly where it would fit in your day-to-day work, what data it would touch, and how you would use it. Purpose-built or enterprise tools will probably be an easier lift than an off-the-shelf chatbot. Talk to other advisors who use what you want to use. The more you can make a case for targeted and responsible use of AI in your work, the more your compliance professionals will be willing to work with you.

BONUS: HOW WE WROTE THIS

We mentioned that we wrote this article with AI assistance. But the process was human-led from start to finish. If we had just asked a chatbot to write an article about AI adoption, we would likely have received a poorly written piece that was off-topic, stilted, and short on real answers. Instead, we started with the notes we took from our live attendance at Invest in Women, then fed them into our AI tools to develop an outline. We defined our intended audience, created a structure, and then made heavy edits to make sure the voice and takeaways were uniquely ours.

If you’re curious about incorporating this technology into your own work, reach out to us. We’re always happy to compare notes.