
Text choice in IB English Language and Literature looks like a minor administrative decision right up until it doesn’t. A teacher issues a reading list, a student accepts it, maybe argues for one favorite, and the matter seems settled. What that routine rarely surfaces is whether those works, taken together, can actually perform across the full range of what assessment requires.
Paper 2 asks for sustained comparison across at least two works. The Individual Oral (IO) requires a global issue traced through both a literary and a non-literary text. The Higher Level (HL) Essay requires a passage that carries an independent critical argument without any external prompt or scaffold. A text can generate strong seminar discussion and still leave a real structural gap in every one of those components. The reading list is not the portfolio—it’s just the starting material.
Four Criteria for Evaluating Any Single Text
Most students evaluate texts one at a time, and that’s precisely when the problem becomes invisible. A work can be analytically rich, thematically layered, and genuinely interesting in isolation, and still underperform when it needs to compare, pair, or carry a component that nothing else in the list can support. The portfolio-level cost only shows up later. Four criteria help surface it earlier.
Analytical richness asks whether close attention to language, form, and structure keeps revealing new angles, or whether the text collapses into a single predictable reading. Thematic range asks how many different global-issue lanes the work can credibly sustain. Form and mode contribution asks whether the text type or literary mode adds something structurally new to the list. Comparative generativity asks whether placing the work beside others creates real analytical tension rather than just another variation on the same move.
Score each text 0–2 on four axes:

- Analytical richness
- Thematic range (global-issue flexibility)
- Form/mode contribution
- Comparative generativity

Red flags: if comparative generativity scores 0, don’t treat the text as a Paper 2 anchor; if thematic range scores 0, don’t rely on it for prompt resilience. Tie-breaker: when totals are similar, prefer the text that widens portfolio coverage (a new form/mode or a new global-issue lane) over one that duplicates. Log each decision in one line: Title → total score → biggest strength → biggest limitation → allowed role (Core / Conditional / Drop).
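The rubric above can be sketched as a small script. The `Text` record, the sample scores, and the rule that any red flag demotes a text to Conditional (and a very low total to Drop) are illustrative assumptions, not part of the IB framework:

```python
from dataclasses import dataclass

@dataclass
class Text:
    title: str
    analytical_richness: int       # 0-2
    thematic_range: int            # 0-2 (global-issue flexibility)
    form_mode_contribution: int    # 0-2
    comparative_generativity: int  # 0-2

def evaluate(text: Text) -> dict:
    total = (text.analytical_richness + text.thematic_range
             + text.form_mode_contribution + text.comparative_generativity)
    flags = []
    if text.comparative_generativity == 0:
        flags.append("not a Paper 2 anchor")
    if text.thematic_range == 0:
        flags.append("no prompt resilience")
    # Assumed demotion rule: any red flag -> Conditional; total <= 2 -> Drop.
    role = "Drop" if total <= 2 else ("Conditional" if flags else "Core")
    return {"title": text.title, "total": total, "flags": flags, "role": role}

# One-line decision log, in the format described above.
log = evaluate(Text("Sample Novel", 2, 2, 2, 1))
print(f"{log['title']} -> {log['total']} -> {log['role']}")
```

The point of the script is not automation; it is that the rubric forces an explicit, comparable record for every text instead of a vague impression.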
These four criteria map directly onto what the three major components test. Analytical richness is non-negotiable for the HL Essay, where the entire argument rests on close reading of a chosen passage. Thematic range and form and mode contribution matter for the IO, which depends on global issues articulated across literary and non-literary language. Comparative generativity is the backbone of Paper 2, where a text’s value depends on how it behaves beside another work, not on how compelling it reads alone.

Testing Text Pairings for Paper 2
Two texts can share an obvious topic and still produce a weak comparative essay—and that’s the Paper 2 problem most students encounter too late. If both works approach their subject with similar forms, similar registers, and similar structural assumptions, the comparison has nowhere to go analytically. A pairing earns serious preparation time only when it creates genuinely arguable differences in form, structure, or the conditions under which each text was produced—differences that shape how meaning is constructed in each work.
Start by naming at least one precise formal or contextual difference that generates a real comparison. Instead of stopping at “both explore identity,” pin down something arguable about how they do it—a contrast in narrative structure, voice, or production context. Topic matching feels like preparation; it’s mostly pattern recognition at the wrong level of analysis. If you can’t make that formal or contextual difference explicit in your own words, the pairing is probably coasting on shared subject matter rather than the comparative thinking Paper 2 actually rewards.
Then stress-test the pair. Reframe the focus through at least three different hypothetical prompt angles and check whether both texts stay genuinely usable each time. If the pairing only holds under one specific formulation, it’s fragile under exam conditions. Finally, check that the comparison runs in both directions: if one work carries all the analytical weight while the other mostly echoes it, the weaker partner isn’t a reliable anchor text when the stakes are real.
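As a minimal sketch, the stress test above reduces to a counting rule: a pairing is robust only if both texts stay usable under at least three distinct prompt angles. The angle names and usability judgments below are hypothetical inputs a student would supply:

```python
def pairing_is_robust(usable_under: dict[str, tuple[bool, bool]],
                      required_angles: int = 3) -> bool:
    """usable_under maps a prompt angle to (text_a_usable, text_b_usable)."""
    both_work = [angle for angle, (a, b) in usable_under.items() if a and b]
    return len(both_work) >= required_angles

angles = {
    "power and voice":       (True, True),
    "structure and time":    (True, True),
    "setting as constraint": (True, False),  # the weaker partner drops out here
    "narrative reliability": (True, True),
}
print(pairing_is_robust(angles))  # three shared angles -> True
```

Note that the one-sided angle (`setting as constraint`) doesn’t count: a comparison that only one text can sustain is exactly the “weaker partner” failure the paragraph above describes.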
IO Text Selection: The Global Issue Pairing Test
Students preparing the Individual Oral often discover too late that two works sharing a topic don’t automatically speak to each other at the level of technique. That’s exactly the gap the IO’s structure exposes. The component asks students to trace a global issue through the specific language, form, and choices of both a literary and a non-literary text. When text selection wasn’t designed with that pairing in mind, analysis tends to default to subject-matter overlap rather than the technical articulation the task actually requires.
Before recruiting a literary text as an IO candidate, identify the global issues it truly sustains rather than briefly touches. Then confirm that at least one non-literary text on your list treats the same issue with comparable depth. Finally, test the direction of the claim: the global issue should show up in how both texts use language, structure, or visuals—not just in what they’re about. An IO built only on subject-matter overlap tends to demonstrate what a student has noticed rather than what they can argue. That’s a different and harder gap to close when the HL Essay arrives.
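The pairing test above is, at its core, a set intersection: which non-literary texts on the list share at least one sustained global issue with the literary candidate? The titles and issue labels below are hypothetical data a student would fill in from their own list:

```python
def io_candidates(literary_issues: set[str],
                  nonliterary: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return non-literary texts sharing at least one sustained issue,
    mapped to the issues they share with the literary candidate."""
    return {title: issues & literary_issues
            for title, issues in nonliterary.items()
            if issues & literary_issues}

shared = io_candidates(
    {"displacement", "gendered power"},      # issues the literary text sustains
    {"Photo essay": {"displacement"},        # hypothetical non-literary texts
     "Ad campaign": {"consumerism"}},
)
print(shared)  # only the photo essay shares a sustained issue
```

The harder, non-automatable part remains the paragraph’s final test: confirming the shared issue lives in how each text uses language, structure, or visuals, not merely in its subject matter.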
HL Essay Text Criteria and Portfolio-Level Assessment
The HL Essay shifts responsibility for focus almost entirely onto the student. There’s no guiding prompt, no externally set text—you select both the work and the specific passage, then generate your own research question. That design is notably unforgiving about text quality: there’s nothing external to compensate for a thin or inflexible work. Where Paper 2 and the IO have structural scaffolding that can partially sustain a weaker choice, the HL Essay doesn’t.
Two quick tests protect against that problem. First, identify a concrete passage of appropriate length and ask whether close attention to its language, structure, and context would support multiple layers of analysis—not just paraphrase or theme-spotting. Second, check that you can phrase an arguable, specific research focus arising from that passage rather than a broad theme label. If the text keeps pushing you toward general ideas instead of precise critical questions, it’s not a strong HL Essay candidate.
With individual works evaluated, scan the portfolio as a whole. Look for three things: form coverage, meaning more than one major text type or mode; thematic spread, meaning the list doesn’t cluster around a single narrow global-issue category; and comparative density, meaning every text forms at least one analytically productive pairing. When gaps appear, address single-point-of-failure weaknesses first: cover any missing form or mode lane, and replace any work that can’t meaningfully compare with anything else. Then reduce over-reliance on any one global-issue category. Only after those fixes are in place is it worth refining within areas already covered—and even then, don’t replace a text unless it clearly improves at least two of the four criteria.
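A minimal sketch of that portfolio scan, assuming each text is recorded with its form, its global-issue lanes, and its viable pairings (all of the sample data is hypothetical):

```python
def portfolio_gaps(texts: list[dict]) -> list[str]:
    gaps = []
    # Form coverage: more than one major text type or mode.
    forms = {t["form"] for t in texts}
    if len(forms) < 2:
        gaps.append("form coverage: only one text type/mode")
    # Thematic spread: flag when a single global-issue lane dominates
    # (the majority threshold here is an assumed heuristic).
    issues = [lane for t in texts for lane in t["issue_lanes"]]
    if issues and issues.count(max(set(issues), key=issues.count)) > len(issues) / 2:
        gaps.append("thematic spread: one global-issue lane dominates")
    # Comparative density: every text needs at least one productive pairing.
    for t in texts:
        if not t["pairings"]:
            gaps.append(f"comparative density: {t['title']} pairs with nothing")
    return gaps

portfolio = [
    {"title": "Novel A", "form": "novel", "issue_lanes": ["identity", "power"],
     "pairings": ["Essay B"]},
    {"title": "Essay B", "form": "essay collection", "issue_lanes": ["power"],
     "pairings": ["Novel A"]},
    {"title": "Play C", "form": "drama", "issue_lanes": ["identity"],
     "pairings": []},  # single point of failure: compares with nothing
]
print(portfolio_gaps(portfolio))
```

Run on the sample list, the scan flags only Play C’s missing pairing, which matches the priority order above: a text that can’t compare with anything is the first thing to replace.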
Using the Portfolio Framework Across the Program
A reading list issued in the first week of a course was designed to be teachable, not assessment-resilient. Evaluating texts against analytical richness, thematic range, form and mode contribution, and comparative generativity, then stress-testing pairings for Paper 2, the IO, and the HL Essay, converts that starting point into something more deliberate. The same four tests apply whenever a new text enters the picture.
A portfolio doesn’t need to be perfect. It needs to be flexible enough that no single prompt, pairing, or research question can break it.