Phil Billingham: Artificial – certainly. Intelligence? Not (quite) yet
Speaking personally, my least favourite part of any holiday is the car hire. Usually the hard part is all the stupid paperwork and ‘gotcha’ clauses.
This time, on a recent holiday, it was the car. Brand new, with all the toys, including a very enthusiastic ‘steering assistance’ function that gave a very robust twitch of the wheel to ‘compensate’ if it felt you had drifted over a white line on the road.
Neither the manual nor Google helped us switch it off, so we put up with it as a low but persistent irritation.
That is, until we were up in the Pyrenees, on a winding mountain road. A few thousand feet above the valley floor, we were confronted by a white BMW on our side of the road.
I edged a little closer to the drop to give us space to get around him. The system took exception to this and tried – very firmly – to steer us back into the path of the errant driver.
I caught it in time, but mere annoyance had now escalated into actual danger. Not great, not great at all.
On my return to the UK, I looked into what is going wrong with AI – the stories we are not always hearing.
The stories – some of them funny – seem to fall into four categories:
- Areas where the technology cannot – yet – do what its makers want it to do. We used to call this ‘vapourware’, and we can be sure that, whatever the current position, the systems will soon be able to deliver this stuff. Driverless cars will probably fall into this category, despite trying to kill me…
- Areas where nobody has thought through the scenario properly before writing the program. The case of the camera at a football match following an official’s bald head, rather than the ball, probably fits into this category.
- The third category is where the software reflects its designers’ biases: facial recognition software disproportionately flagging black faces for crimes, or a gender identification app assuming anyone titled ‘Dr’ must be male. It would be nice if these were corrected, and corrected soon. But one wonders where the software is being written, what the local culture there is, and what controls and tests are put in place.
- It’s the last category where the real issues may lie. This is where the system has been allowed to make things up and present them as fact.
These examples range from made-up court cases, cited as precedent, through to allegations of criminal damage against a basketball player.
The one that really bothers me is the case where AI falsely accused a university professor of sexual assault. The accusation was then repeated elsewhere, is now embedded on the internet as fact, and crops up in searches and, in turn, in references. That kills careers – and who do you sue?
Returning to our world, AI is the latest in a long line of ‘things’ predicted to replace advisers.
At its current level of competence, I can see AI making providers more efficient by supplying factual information and data.
I can see it helping paraplanners produce more consistent research on how to implement advice.
Unfortunately, I can also see it being used to produce ever more believable scams.
I just don’t see it replacing the intuition, flexibility and connection the true adviser brings to their client relationships… yet.
Phil Billingham FPFS CFP Chartered Financial Planner, Chartered Fellow (Financial Planning) is a Financial Planner and a director of Perceptive Planning, a Chartered Financial Planning firm based in London and Essex. https://www.perceptiveplanning.co.uk/
Biography: Phil joined the profession in 1982 and is a past director of the Institute of Financial Planning (IFP) which merged with the CISI in 2015. He is a past member of the Financial Planning Standards Board (FPSB) Regulatory Advisory Panel. He is a specialist in helping advisers cope with regulatory change and has worked with advisers, planners and regulators in the UK, Europe, USA, Canada, South Africa and Australia. He writes this column regularly for Financial Planning Today.