Prepare for the A Level Computer Science OCR Exam with engaging quizzes, detailed explanations, and effective study tips. Maximize your readiness and boost your confidence for exam day!

Practice this question and more.


How is a character defined in programming?

  1. A sequence of characters

  2. A single alphanumeric character

  3. A number with a decimal

  4. A logical TRUE or FALSE value

The correct answer is: A single alphanumeric character

In programming, a character is defined as a single unit of text data, typically an alphanumeric character such as the letter 'A', the letter 'b', or the digit '1'. In practice, the character data type can also hold punctuation marks and other symbols that a computer can represent. Each character maps to a specific code in a character encoding system such as ASCII or Unicode, which allows text to be stored and processed consistently across programming languages.

The key point is that a character refers to one individual unit of text, not a collection or sequence of characters, which would instead form a string. Keeping this distinction clear is essential when choosing data types.

The other options describe different data types altogether: a sequence of characters is a string, a number with a decimal point is a floating-point (real) value, and a logical TRUE or FALSE value is a Boolean. None of these matches the singular concept of a character.
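To make the distinction concrete, here is a minimal sketch in Python (not part of the OCR specification, just an illustration) contrasting a character with a string, a float, and a Boolean, and showing how its encoding value can be inspected:

```python
# A character: one individual unit of text data
# (Python has no separate char type; a character is a string of length one)
letter = 'A'

print(ord(letter))   # 65  -- the ASCII/Unicode code point for 'A'
print(chr(65))       # 'A' -- converting a code point back to a character

# Contrast with the other options from the question:
word = "Hello"       # a sequence of characters -> a string
price = 3.14         # a number with a decimal   -> a float (real)
flag = True          # a logical TRUE/FALSE value -> a Boolean

print(len(word))     # 5   -- a string is made up of several characters
print(word[0])       # 'H' -- indexing a string gives back a single character
```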