4.10 Explain basic concepts related to artificial intelligence (AI).
📘CompTIA A+ Core 2 (220-1202)
Artificial Intelligence (AI) is powerful, but it has important limitations that you must know for the exam: bias, hallucinations, and accuracy issues. Understanding these limitations helps IT professionals use AI tools safely and effectively.
1. Bias
What it is:
Bias in AI happens when the system gives unfair or skewed results because of the data it was trained on. AI learns from large datasets, and if the data has patterns that favor some outcomes over others, the AI will “learn” that bias.
IT Example:
- In IT security, an AI might scan log files to detect suspicious login attempts. If the AI was trained mostly on data from one type of network, it might incorrectly flag legitimate users from other networks as threats—or miss real threats in networks it hasn’t seen.
- Similarly, AI used for helpdesk ticket prioritization might wrongly classify tickets because it learned from older tickets that overrepresented certain types of problems.
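The first example above can be sketched in a few lines of Python. This is a deliberately tiny, illustrative demo (the IP addresses and the subnet rule are made up, not a real detection tool): a "detector" that has only ever seen logins from one subnet ends up flagging every legitimate user on any other subnet.

```python
from collections import Counter

# Training data drawn from only ONE network -- this is the source of the bias.
# All addresses are illustrative.
training_logins = ["10.0.0.5", "10.0.0.7", "10.0.0.5", "10.0.0.9"]
seen_subnets = Counter(ip.rsplit(".", 2)[0] for ip in training_logins)

def looks_suspicious(ip: str) -> bool:
    # Flags any login from a subnet absent from the training data,
    # so a legitimate user on 192.168.x.x is misclassified as a threat.
    subnet = ip.rsplit(".", 2)[0]
    return seen_subnets[subnet] == 0

print(looks_suspicious("10.0.0.12"))    # False: matches the training pattern
print(looks_suspicious("192.168.1.4"))  # True: legitimate, but never seen
```

The "model" here is just a frequency count, but the failure mode is the same one the exam describes: the skew comes from the training data, not from the code.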
Key Exam Point:
Bias can make AI outputs unreliable, especially in decisions that affect users or systems. IT staff need to review AI recommendations critically.
2. Hallucinations
What it is:
AI hallucinations occur when the AI produces information that is completely made up or incorrect, even though it may sound convincing. This is common in language-based AI systems like chatbots.
IT Example:
- An AI support bot might invent a command that doesn't actually exist, or suggest one that could cause damage if executed.
- In documentation generation, an AI might create configuration instructions that are syntactically correct but don’t work on the system.
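A simple defensive habit follows from the first example: check that an AI-suggested program even exists on the system before considering it. The sketch below is illustrative (the tool name `srvfixd` is deliberately fictitious to stand in for a hallucinated command); it uses Python's standard `shutil.which` to look the program up on the PATH.

```python
import shutil

# Pretend an AI support bot suggested this; the tool name is fictitious.
ai_suggested = "srvfixd --repair-all"
program = ai_suggested.split()[0]

if shutil.which(program) is None:
    print(f"'{program}' not found -- do not run; verify against real docs")
else:
    print(f"'{program}' exists -- still review its flags before running")
```

Note that a command existing is not the same as it being safe: this check only catches outright hallucinated tools, so the exam advice still applies in full (verify against official documentation before executing anything).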
Key Exam Point:
Hallucinations mean you cannot blindly trust AI outputs. Always verify AI-generated instructions or answers before using them in your IT environment.
3. Accuracy
What it is:
Accuracy refers to how correct or reliable AI outputs are. AI is not perfect; even with good data, it may make mistakes due to complexity, outdated data, or ambiguous inputs.
IT Example:
- An AI malware detection tool might miss new malware variants (false negatives) or flag safe programs as malware (false positives).
- AI tools used for system monitoring might incorrectly predict server downtime if the input data is incomplete.
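The malware-scanner example can be made concrete with a little arithmetic. The counts below are invented purely for illustration; the point is that a headline "accuracy" number can look excellent while recall reveals the scanner missed over a fifth of real threats (false negatives).

```python
# Illustrative confusion-matrix counts for a malware scanner:
# tp = real malware caught, fp = safe apps misflagged,
# tn = safe apps passed,    fn = real malware missed.
tp, fp, tn, fn = 90, 5, 880, 25

accuracy  = (tp + tn) / (tp + fp + tn + fn)
precision = tp / (tp + fp)   # of everything flagged, how much was real malware
recall    = tp / (tp + fn)   # of all real malware, how much was caught

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f}")
# prints: accuracy=0.970 precision=0.947 recall=0.783
```

97% accuracy sounds reassuring, yet 25 of 115 real threats slipped through, which is why the exam point stresses treating AI outputs as assistive rather than absolute.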
Key Exam Point:
Accuracy depends on quality data, proper training, and continuous monitoring. IT professionals must understand that AI outputs are assistive, not absolute.
Summary Table for the Exam
| Limitation | What it Means | IT Example | Exam Tip |
|---|---|---|---|
| Bias | AI favors some outcomes over others | Security logs misclassified due to training data | Know that biased data → unreliable AI |
| Hallucinations | AI makes up info that is wrong but sounds real | AI bot suggests incorrect server commands | Verify AI outputs before use |
| Accuracy | AI can be wrong even with correct data | Malware scanner misses threats or misflags safe apps | AI outputs are helpful but not perfect |
✅ Exam Tip:
If a question asks about AI limitations, think:
- Bias → unfair/skewed results
- Hallucinations → completely made-up outputs
- Accuracy → can be wrong or unreliable
In IT environments, AI is a tool to assist humans, not replace them. Always verify critical information.
