As products continue to integrate AI features and functionality directly, what are the risks of intentional bias in what those features produce?
Why would someone produce an AI Tool with an intentional bias?
Money.
It’s going to be very tempting for database platform companies, as well as the ancillary software products adding AI integration, to build in a bias toward other products from the same company.
If you are DB Platform Company A, it is to your advantage to push people to spend more money with you, and you gain a further advantage if you can steer them toward your other products as well.
This could be as simple as proposing solutions within the software that require an additional investment in other tools or enhanced versions of the product. If the product is a cloud offering, the AI might offer solutions that increase cloud spending.
The easiest way to introduce this bias may be to have the tool’s training data come from the vendor itself. The vendor might exclude training data from places like Reddit or the wider Internet. Or they could use those wider sources but also include their own data, which skews the training toward their more advanced features.
This bias must be subtle because overt advertising would be very noticeable. So, if Database Software Company A is going to do this, it will have to go with the soft sell.
I think we are going to end up seeing it; the pressure from the marketing department and the desire to produce demonstrable ROI for an AI investment are going to outweigh the advantages of not doing it.
This may also create some opportunities for people in the market. Thoughts on that are coming.