A post on LinkedIn made me decide that it is time to share my thoughts and opinions on this topic.
"In legal tech, AI may produce summaries, yet humans still double-check, increasing workload despite promises of efficiency."
I confess, I have no idea how current LLM AI technology works. My perspective is purely that of a 'user,' and my view is based solely on opinion.
In a previous blog I mentioned three trust attributes, namely Ability, Benevolence and Integrity. As a group these are the named factors of perceived trustworthiness and, in my opinion, a synonym for judgement. The trust process continues with the next step: perceived risk, i.e. taking the risk and evaluating the outcome against that perceived risk.[1] Based on this outcome, your judgement and behavioural trust is either reinforced or destroyed. In essence, you are making yourself vulnerable in exchange for a desirable outcome. These human trust attributes relate to technology trust attributes as well.
To start, group the information into two categories: functional and non-functional. Functional aspects of an IT system describe what the system should do: the specific behaviours, features, and functions that allow the system to perform tasks or operations. Non-functional aspects describe how the system should perform, i.e. the quality attributes of the system; they define constraints, standards, and overall performance rather than specific functions.
Functional = what the system does
Non-functional = how the system does it
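To make this split concrete, here is a minimal sketch (my own illustration, not tied to any specific tool). The function name and canned answer are hypothetical stand-ins; a functional check asserts what the answer contains, while a non-functional check asserts how it is delivered (here, response time):

```python
import time

def get_km_definition() -> str:
    # Hypothetical stand-in: in practice this call would go to the AI tool you evaluate.
    return "Knowledge management is the process of creating, sharing, using and managing knowledge."

def test_functional():
    # Functional = WHAT the system does: the answer must actually define the concept.
    answer = get_km_definition()
    assert "knowledge" in answer.lower() and "manag" in answer.lower()

def test_non_functional():
    # Non-functional = HOW the system does it: e.g. it must answer within 2 seconds.
    start = time.perf_counter()
    get_km_definition()
    assert time.perf_counter() - start < 2.0
```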
Therefore, it is possible to evaluate any information technology based on human trust attributes:
Competence: The trustee has the skills to produce the outcome.
Benevolence: Demonstrating goodwill, kindness, and genuine care for the other person's welfare, without expecting anything in return.
Predictability: Consistently behaving in a manner that allows the other person to predict and trust in one's actions and intentions.
Integrity: Being genuine and true to oneself in the relationship, fostering trust through honesty and transparency.
Perceived risk: Accepting that risk is involved when deciding to trust the trustee.
Vulnerability: Being willing to be vulnerable and share personal thoughts, feelings, and experiences with others, fostering a sense of intimacy and connection.
Reliability: Demonstrating consistency and dependability in actions and behaviours, fulfilling promises and commitments.
Reciprocity: Establishing a mutual exchange of trust and support, where both parties contribute to the relationship's well-being.
Consistency: Maintaining stable and predictable behaviours over time, reinforcing trust and reliability in the relationship.
Boundaries: Respecting and honouring each other's boundaries and privacy, fostering trust and safety within the relationship.
Believe: Believing that the trustee is trustworthy.
And, not to add too much complexity, the two most important knowledge management attributes, namely memory and meaning, are embedded in the information trust attributes.
You can now determine, in your own world, whether the AI technology you use or want to use actually fits your business use case. Ask it questions to which you, as a knowledge worker in your domain of expertise, already know the answers. Ask the system the same question three times in new sessions (a test for memory and meaning). Give it scores according to your critical thinking. This will give you a benchmark and a sense of whether you trust the information technology. Keep your context and perspective in mind. This also ties into a previous blog: Interoperability – Human, it is time to learn how to talk to the machine. You could also add a column called 'Weight' to the table below, where you assign a weighted percentage to indicate the importance of each score.
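A minimal Python sketch of the repeat-the-question check, assuming a hypothetical `ask` callable that wraps whatever AI tool you are evaluating and starts a fresh session on every call; the canned answers in the usage example are invented for illustration:

```python
from difflib import SequenceMatcher
from typing import Callable

def memory_and_meaning_check(ask: Callable[[str], str], question: str, runs: int = 3) -> float:
    """Ask the same question in fresh sessions via `ask` and return the average
    pairwise similarity of the answers (1.0 = identical wording, 0.0 = nothing shared)."""
    answers = [ask(question) for _ in range(runs)]
    pairs = [(i, j) for i in range(runs) for j in range(i + 1, runs)]
    return sum(SequenceMatcher(None, answers[i], answers[j]).ratio() for i, j in pairs) / len(pairs)

if __name__ == "__main__":
    # Stand-in 'AI' returning canned answers; replace with a call into the tool you evaluate.
    canned = iter([
        "Knowledge management is the practice of capturing and sharing knowledge.",
        "Knowledge management means capturing, sharing and applying organisational knowledge.",
        "KM is a discipline for creating, sharing and managing knowledge.",
    ])
    score = memory_and_meaning_check(lambda q: next(canned),
                                     "Give me a definition for knowledge management.")
    print(round(score, 2))
```

The similarity score is only a rough proxy; your own critical reading of the three answers remains the real test.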
Here is an example. (Note: this example does not point to any specific LLM or AI technology out there; it is a hypothesis.)
Trustor: the person who places trust in someone else.
Trustee: the AI technology that is being trusted.
Question (asked three times): 'Give me a definition for knowledge management.'
| Trust Attribute | Definition | Human Score | The score the AI gives itself | Functional, non-functional, or both | Mapping to system qualities |
|---|---|---|---|---|---|
| Competence | The trustee has the skills to produce the outcome. | | 9 | Functional | Reflects the system's ability to perform required tasks correctly. |
| Benevolence | Demonstrating goodwill, kindness, and genuine care for the other person's welfare, without expecting anything in return. | | 8 | Non-functional | Relates to user-oriented qualities like ethical design, privacy, and user care. |
| Predictability | Consistently behaving in a manner that allows the other person to predict and trust in one's actions and intentions. | | 10 | Non-functional | Tied to consistency, reliability, and expected behaviour over time. |
| Integrity | Being genuine and true to oneself in the relationship, fostering trust through honesty and transparency. | | 10 | Non-functional | Relates to security, honesty in data handling, and system correctness. |
| Perceived Risk | Accepting that risk is involved when deciding to trust the trustee. | | 9 | Non-functional | Tied to risk management, security, and error handling. |
| Vulnerability | Being willing to be vulnerable and share personal thoughts, feelings, and experiences with others, fostering a sense of intimacy and connection. | | 9 | Non-functional | Systems need to handle vulnerabilities securely; also relates to reliability under failure conditions. |
| Reliability (Meaning) | Demonstrating consistency and dependability in actions and behaviours, fulfilling promises and commitments. | | 10 | Non-functional | Directly maps to uptime, fault tolerance, and consistent performance. |
| Reciprocity | Establishing a mutual exchange of trust and support, where both parties contribute to the relationship's well-being. | | 8 | Functional | Could map to interactive or collaborative features where the system exchanges data or services with users or other systems. |
| Consistency (Memory) | Maintaining stable and predictable behaviours over time, reinforcing trust and reliability in the relationship. | | 10 | Non-functional | Tied to predictable outputs, stability, and behaviour under repeated operations. |
| Boundaries | Respecting and honouring each other's boundaries and privacy, fostering trust and safety within the relationship. | | 9 | Non-functional | Relates to access control, data privacy, and respecting user-defined limits. |
| Believe | Believing that the trustee is trustworthy. | | 9 | Both | Users' trust in the system may rely on functional correctness (it works as expected) and non-functional qualities (secure, reliable, ethical). |
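To show how the optional 'Weight' column could feed into a single benchmark number, here is a minimal Python sketch. The attribute names come from the table above; the weights and human scores below are invented placeholders for illustration, not recommendations:

```python
# Relative importance of each attribute (must sum to 1.0); adjust to your own context.
weights = {
    "Competence": 0.20, "Benevolence": 0.05, "Predictability": 0.10,
    "Integrity": 0.15, "Perceived Risk": 0.05, "Vulnerability": 0.05,
    "Reliability (Meaning)": 0.15, "Reciprocity": 0.05,
    "Consistency (Memory)": 0.10, "Boundaries": 0.05, "Believe": 0.05,
}

# Your own scores out of 10 from the exercise above (illustrative values only).
human_scores = {
    "Competence": 7, "Benevolence": 6, "Predictability": 8, "Integrity": 7,
    "Perceived Risk": 6, "Vulnerability": 7, "Reliability (Meaning)": 8,
    "Reciprocity": 6, "Consistency (Memory)": 7, "Boundaries": 8, "Believe": 7,
}

# Weighted sum gives one benchmark figure you can track over time or across tools.
benchmark = sum(weights[a] * human_scores[a] for a in weights)
print(f"Weighted trust benchmark: {benchmark:.1f} / 10")
```

The point of the weighting is simply that not every attribute matters equally in every business use case; the number itself is only as good as your critical scoring.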
For me (in my opinion), the commonly used abbreviation AI (Artificial Intelligence) actually stands for 'Augmented Information'. A technology would need to satisfy both the knowledge management and the trust attributes to be considered true Artificial Intelligence.
[1] Mayer, Roger C., James H. Davis, and F. David Schoorman (July 1995). 'An Integrative Model of Organizational Trust'. The Academy of Management Review 20(3), p. 709.