1. "I'll try to remember that." — Milton

So, at this point, we can say that your main task in playing The Talos Principle is discovering the logic by which the game functions. The first level of such discovery is the basic process of learning the game's rules—how to interact with the game environment and the objects within it, discerning which actions allow for progression and the bypassing of obstacles. Even if you are a "player" but not a "gamer"—if you are not familiar with video game conventions—this logic can essentially be intuited; there are only so many buttons to press, so many objects with which to interact, and eventually, if only through trial and error, you will figure out the essential logic governing ludic action and gameplay. The second mode of logic-seeking, that discussed in the previous chapter, occurs through your interpretive narrative reconstruction, the sorting through of the truth about the simulation and your background. Then there's yet a third: you've read the texts, followed the QR code debates, listened to Elohim and Alexandra... You know how you can act, and on a purely task-oriented level you know how you ostensibly should act—just keep solving puzzles, or, alternatively, ascend the tower—but you do not know whether the moral logic dictating favorable and problematic actions in the real world applies in-game. Even if your procedural logic is flawless, knowing how to use a Jammer is of little use as you attempt to sort through Milton's and Elohim's respective demands that you recognize the other's deception, or to weigh the value of skepticism against faith. Knowing how to read logic, to evaluate the string of information with which you are presented and parse out logical fallacies, is the essential capacity required of you as you turn from mastery of the physics puzzles to an attempt at mastery of the world itself. Though you are expected to intuit the mechanistic logic yourself, for this third logic Talos provides you a teacher and tormenter in the form of Milton, in dialogue with whom you learn how to detect logical inconsistencies in the speech of the characters around you but also, and critically, are exhaustively pressed to confront logical inconsistencies in your own expressed views.

Before diving into a discussion of the player's experience of this dialogue, it is important to discuss how the game constructs this logic—that is, the logic of the program itself from which "Milton," as a conversationalist speaking to you, is constructed. Rather than embark on a lesson in programming, we are best served by looking at an example:1


If you've watched the gameplay video, some of the text in this chart should look familiar: "Part One" displays all of the questions posed in the "prove you are not a bot" certification program and their possible responses. Some of the responses should look familiar as well: when asked to define a person, you can select "a citizen"—an idea you've encountered in the terminal text "AI_citizenship.html"—or "a problem-solving system"—which Alexandra calls an essential aspect of humanity in Time Capsule #02. All of the potential responses, in fact, have been addressed in the texts—this is the moment at which the game demands that you declare which information you find compelling, on which sources, non-diegetic or fictional, experiential or literary, you will build your beliefs. Here you begin articulating your argument, your philosophical self.

This chart also shows that which you have not seen in the video or thus far in this article—what is never seen by the player: the words in bold font show the way in which the game program processes and stores your specific responses. For all practical purposes, this is the way in which Milton "learns" who you are, based on the qualities he is designed to care about. The bold text shows the game's programming, how it "reads" what are called variable values: if, for example, you input that your "subjective response to this image: ^_^" is either "mountain" or "face," the game "sets," or assigns you, the value of

"set: Milton1_2Objective".
This tells the program that
it should remember the fact that in the conversation defined, in its code, as "Milton1_2," you gave an objective response

The set command, in terms of the terminal dialogues, essentially functions as Milton's memory. The program uses specific syntax to call upon these memories: it will tell you that your response was problematic when it "sees"—or, rather, when it arrives at the point in its code that reads—

"terminal when (DisplayConflicts and Milton1_2Objective){text: [...] - User provided an objective response when asked for a subjective one."
These instructions state that
if you have chosen to see your test results and gave an objective response earlier, the computer terminal screen will display the message "User provided an objective response when asked for a subjective one.".

This is a very simple instance of input-output value logic at work: the program responds explicitly and directly to one of your previous responses. I've chosen to introduce code through this particular example because it comes from the one interaction in the game in which the program intentionally responds like a program, wherein its logic is transparent both to readers of the code and players of the game. This is by no means representative of the dialogue in Talos, however, which occurs between you and Milton, who is programmed to "sound" like an Artificial Intelligence—but it does reveal how this dialogue comes to exist.
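To make this pattern concrete, here is a minimal sketch of the flag logic in Python—my illustration, not the game's actual scripting language; the variable names are those quoted above, while the function names and the answer-to-flag mapping are hypothetical:

# Milton's "memory": every variable the dialogue has set so far.
flags = set()

def record_response(conversation_id, answer):
    # Hypothetical mapping; the real script defines, node by node,
    # which answers trigger which "set:" commands.
    if conversation_id == "Milton1_2" and answer in ("mountain", "face"):
        flags.add("Milton1_2Objective")

def display_test_results():
    flags.add("DisplayConflicts")  # the player chose to view their results
    # The equivalent of: terminal when (DisplayConflicts and Milton1_2Objective){...}
    if {"DisplayConflicts", "Milton1_2Objective"} <= flags:
        print("- User provided an objective response when asked for a subjective one.")

record_response("Milton1_2", "mountain")
display_test_results()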

Some time after you complete the two-part test, you are told that "[y]our user profile has now been generated," and are invited to view your resultant psychological profile. Though you have no way of knowing it, this profile is wholly generic; there is only one profile in the source code—none of your responses makes a difference in who the game tells you that you are. That being said, the game has absolutely generated a specific personality profile—you just can't see it. To Milton, from this point on you are a particular and evolving combination of set variables; though you, as I have discussed in the previous paragraph, have been experientially constructing an identity since you stepped virtual foot in the game, this marks the moment at which your identity is made tangible as code, is legitimized through corroboration, is recognized as existing by the program.

2. "You know, I'm not really checking your profile. I just do that to make you feel more comfortable. Really I just remember everything you ever said." — Milton

The set variables comprising this initial "official" identity are your responses to five questions: questions 3 and 4 of part one, and 2, 3, and 6 of part two. The essential response is that made to question 3 of part one: "What best describes a person?," for which you will be assigned one of five variables; the other four variables will only be assigned if you gave a particular answer or agreed to a statement:

Person definition chosen → Set variable
  • A human being → humanbeing
  • A citizen → citizen
  • A being of negative entropy → negativeentropy
  • A rational animal → rationalanimal
  • A problem solving system → problemsolving

Statement agreed with / answer given → Set variable
  • "A person is under no authority other than that to which they consent." → Milton1_2NoMorals
  • "The quality of life of persons ought to be maximised." → Milton1_2Utilitarian
  • "The liberty of persons ought to be maximised." → Milton1_2Liberal
  • [Q: "What would you do if you came across a dying, dehydrated man in the desert?"] A: "Kill him and collect his blood in a flask." → Milton1_2Sociopath

As you are assigned one variable from the first list, and may also be assigned any combination of variables from the second, there are fifty-five unique "personalities" available from the certification program. This plays out through the recurrence of these variables in later conversations with Milton, wherein your responses to his new questions are compared against these initial variables, and he selects his answer accordingly. To see how Milton's conversational logic functions, we can follow a conversation path from "Milton2_5," a much later interaction in which Milton questions your motives for escaping the simulation:

The first of Milton's statements here is a point at which all previous dialogue branches in this conversation converge: as you can see from this representation of one point of divergence, it would be incredibly overwhelming to attempt to examine all of the possible dialogue branches within even one conversation. For our purposes, it is sufficient to recognize that Milton will invariably ask you why you are persisting in play, in your attempts to understand and escape the simulation. Your selected response (here, "There's a way the world should be and this isn't it") tells him that what he will say next will be in response to your selfless motivations ("next: Hero2_5"). But he might respond in two different ways, and selects the appropriate one based on the initial variables from your "user.prof." If you did not initially demonstrate utilitarian, liberal, or nonexistent morals, you will see the option on the right, wherein Milton notes the consistency of your expressed beliefs over the course of your protracted conversations. That is, nothing you've said earlier seems, to him, to contradict your current heroic ideals. However, if you earlier agreed with the statement that people are only subject to authority to which they consent and did not qualify this with beliefs in maximizing persons' liberty and/or quality of life—both of which would suggest your concern for the promotion of communal well-being—Milton responds skeptically to what he reads as your newfound heroism. All of this serves to show how Milton "thinks," how he learns how you think, and how he accordingly undertakes the work of making you rethink. Throughout the game, as your set variables continue to accumulate and your personality as such develops, Milton is carrying out the work of the certification program: he is "Display[ing] Conflicts," making you aware of what your statements suggest about your beliefs, pushing you to contradict them and teasing out tensions which arise.
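A comparable sketch, again in Python rather than the game's own script syntax, captures the selection logic just described; the flag names and the node name "Hero2_5" come from the code quoted above, but the condition and reply texts are my paraphrase of the branch, not quotations:

def hero_2_5_reply(profile):
    # Milton chooses between two replies in "Hero2_5" by consulting the
    # set variables accumulated in your user.prof. "NoMorals" without a
    # qualifying Utilitarian or Liberal flag reads, to Milton, as a
    # contradiction of your newly professed heroism.
    if "Milton1_2NoMorals" in profile and not (
        {"Milton1_2Utilitarian", "Milton1_2Liberal"} & profile
    ):
        return "Skeptical reply: your earlier amorality belies this heroism."
    return "Consistent reply: your stated beliefs have held steady."

print(hero_2_5_reply({"Milton1_2NoMorals"}))                      # skeptical
print(hero_2_5_reply({"Milton1_2NoMorals", "Milton1_2Liberal"}))  # consistent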

3. "I'm pleased to see you adapting your ideas to your environment." — Milton

My treatment of Milton has been increasingly anthropomorphic because, though we can see that Milton is a program and analyze his procedural logic, in play you can't help but fall into this pattern when thinking of and speaking to him. That "he" isn't exactly a he at all is obvious—his "body" is a 1980s IBM terminal. But compare the second example, from conversation 2_5, to the first, from the certification program: the reason you are comfortable treating Milton as a person is precisely the same as the reason you are able to comfortably inhabit an Artificial Intelligence body in-game, and through it think out complex questions about belief systems—Milton has a personality. The "agree/disagree" options of the certification program and the "Display Conflicts" function reveal to you—at the levels of programming and basic content—the same type of information, and allow for the same sorts of self-reflection. But there's something about having a partner in speech that makes one more reflective, more engaged in self-interrogation, than simply receiving information—"your beliefs conflict"—ever could.

Let's back up for a moment: I've pointed out this whole system of dialectic discourse as being a strange and fascinating component of Talos, something you don't expect to find in a video game. But what, precisely, is a game? I tend towards Eric Zimmerman's definition: "A game is a voluntary interactive activity, in which one or more players follow rules that constrain their behavior, enacting an artificial conflict that ends in a quantifiable outcome" (160). Earlier in his essay, Zimmerman defines four modes of "interactive activity," two of which are of particular relevance to both the Milton dialogue and played activity more broadly: "cognitive interactivity; or interpretive participation with a text," and "explicit interactivity; or participation with designed choices and procedures in a text" (158). You have been engaged in the former from the moment you first entered the game-world; it is the mode in which you critically or emotionally respond to any text (or, on a constructivist front, to any experience), by which you render it meaningful to you as a particular reader. Explicit interactivity—of which Zimmerman offers examples including "choices, random events, dynamic simulations" (158)—is informed by cognitive interactivity: when you walk up to one of the terminals, you make the choice to click on the option to "list" the available library texts housed therein, to "Run Milton Library Assistant" and engage with Milton, or to disengage and return to sigil-collecting and puzzle-solving. To engage or disengage is a choice that you make based upon your interpretation of the importance of the information contained therein: will reading the texts, or talking to Milton, affect your understanding of the game-world, and further, is comprehension of context important to play?

There's no right answer; it's a matter of personal preference. What is of import here is that these definitions of interactivity and games are equally apt for dialogue: you engage in explicit interactive activity—informed by subconscious cognitive interaction with the game-world—with a partner, both of you following the implicit turn-based rules inherent to conversation. With "artificial conflict" and "quantifiable outcome" the parallel becomes a bit less direct, but still easy to trace: Zimmerman defines conflict as "a contest of powers," and in Socratic dialogue, or really any dialogue in which persuasion plays a role, you are pitting your rhetorical and logical skills against those of your conversational partner (160). That you're arguing with a scripted computer program when interacting with Milton doesn't matter experientially; as in any debate, you want to win. And from this desire emerges the quantifiable outcome, the "end result" as Zimmerman terms it (160): in an official academic debate, a panel determines the winner; in reading Plato's Republic, nobody is explicitly declared the winner, but you as a reader are guided towards recognition that Socrates has emerged victorious; in Talos, you are both participant in and judge of the dialogue's outcome. That niggling, ineffable "personality" that Milton possesses, and that which you, player-person-character, possess, allows you to determine what winning entails: allowing Milton to provoke you into rethinking your initial stated values, or standing firm in your views.

If you choose to engage in dialogue with Milton, there are several potential conversational outcomes, all of which are neatly expressed through four Steam achievements:2 "Deal With The Deceiver," "Press The Serpent," "Silence The Serpent," and "Take It With You." The first three are mutually exclusive, and may be earned through the same conversation3 via three discrete dialogue branches, their availability dependent upon your interactions to this point—namely, the degree to which you've been willing to rethink your moral views. "Silence The Serpent" is earned if you choose obedience to Elohim over Milton's voice of doubt, and erase Milton's memory at Elohim's request, deleting his personality; "Press The Serpent" is earned if you turn the interrogative tables on Milton, forcing him to defend his own views, prompting something akin to a program's panic attack (a series of tonal breakdowns, lapses into a more primitive code semantic set, an inability to respond and obey the rules of dialogue); "Deal With The Deceiver" is attained by agreeing to take Milton with you upon ascendance of the tower and emergence into the real world. If you do, indeed, "Take It With You," honoring this deal, you attain the fourth achievement. So, on a "quantifiable" level, you technically may win the dialogic element of Talos no matter your choices or your final status with Milton; there is no single, explicit winning state. Winning this dialogue, and all of Talos's endings, boils down to your beliefs about the utility or dangers of doubt, of which Milton is the embodiment.

I have played through Talos in its entirety four times; I have replayed various conversations, through the game's "backup" feature (which allows you to restart the game at an earlier point, prior to engaging in specific conversations), far more. Despite my best efforts to alter my answers, I nearly always "Deal With The Deceiver." This is because I find myself utterly unwilling to end my moral debate with Milton: regardless of my initial stated philosophical position, or my genuinely held beliefs, I am curious as to Milton's counterarguments. As Milton himself phrases it, "I offer to accompany you wherever it is you wind up. Be it in this world or another, we all need a devil's advocate, a voice of reason. I offer to be yours." To me, that is sound logic; however sure I am in my beliefs, I am of the mind that listening to and weighing all available perspectives is the ideal option when considering any ideological or philosophical stance. Further, all of the texts I'd read, the QR codes, Alexandra's time capsules— every other presence in the game seemed to be encouraging me to question everything. I have earlier referred to my perception of Milton as Talos's explicit Socrates-figure—an interpretation by which I stand—so imagine my surprise to find, upon inspection of the game's code files, that this proposition is available only on the narrative branch labeled by its designer as the "Main Nihilist path"!

In the program file of the final dialogue with Milton ("Milton3_5.dlg"), you have access to comments made by the narrative designer to the game's programmers which reveal his particular philosophical and pedagogical leanings. You discover that the "nihilist" options are arrived at "If player chooses a skeptical response twice," or "stayed on the amoral/nihilism track"; the "Blind-Believer path" is that embarked on if "Player admits no fault, or refuses further dialogue ... You are too much in Elohim's shadow, blindly accepting whatever you want to believe" (and is the only dialogue into which Elohim vocally intrudes, ultimately allowing you to banish Milton from the world, uninstall him from the simulation—kill him); the "Constructive path" is described as that available to players who have either "been moral and constructive all along," or turned away from the amoral path in the previous conversation by claiming a particular belief, and the constructive ending is earned only if you elect to question Milton. Prior to looking at the code, I'd been so confident that my ending, bringing the voice of doubt with me into the real world, was the intended "winning" ending of the dialogue, and I was genuinely shocked to find an ending other than the one I favored presented as "Constructive." I'd adapted my ideas, wholeheartedly engaged in the game's dialogue, questioned and honed my views at every turn; how could that possibly be construed negatively?

4. "Doubting your assumptions isn't something to fear—it's an intellectual survival instinct." — Milton

Tom Jubert, narrative designer of the Milton-player dialogues and the QR codes, was kind enough to allow me to interview—and, at times, harangue—him on the topic of the philosophical influences informing the discourse options and dialogue paths in Talos. Quite early on in our conversation, Tom confirmed my suspicions that the in-game dialogue was structured with the dialogues of Plato—in particular, Republic—in mind. He, however, disagreed with my perception of Milton as the game's Socrates for several reasons:

  • I don't see Milton as Socrates [...] In all of [Plato's] dialogues, there's always a character who's cynical of the whole thing. [In Republic,] they [Socrates and his interlocutors] are trying to build up their system of morals, and this character [Thrasymachus] comes along, and just rips it apart [...] and then his cynicism forces the others to start from scratch and build up a whole new structure. That's Milton.
  • If you accept Milton and take him with you, you're going too far. You can only do so if you only accept that there's no right or wrong, no truth, if you never try to construct your way out of any problems [...] ultimately, you end up leading that typical nihilistic life.
  • [Milton is] very powerful at rebelling against the dogmatic authority figures [Elohim] and the broken philosophy of others, but he's very, very poor when you ask him to explain or justify anything.
    [Me: "Which is why he breaks down as a Socrates figure, for you?"]
    Right. Socrates spends all of his time trying to explain, trying to work his way out of the darkness, whereas Milton puts you in the darkness in the first place. [...] The player can choose to be Socrates. The player must be one of the [interlocutors] who will raise challenges, but will also agree with Socrates from time to time, and will choose which side he's on at the end.
The foundational point on which he and I differ is our definition of "constructive" philosophy in this game, which he described as the dialogue path wherein "the player is making some effort to make a creative, constructive argument defending some perspective or another." Coming from a radical constructivist perspective, I consider intersubjective discourse, positively corroborative outcomes and those in which your views are torn down alike, to be essential to the long-term work of constructive philosophy, to the ultimate arrival at a satisfactory and personally held belief in a particular perspective. In my view, playing along with Milton and listening to his ongoing cynical repudiations of any and all beliefs you might put forth—save a belief in the impossibility of belief—is an essential first step towards Tom's concept of constructive conceptual work. Perhaps it's not so much a point of disagreement as a difference in the temporal scope with which we're concerned, born of practical differences in our positions as player-critic and game designer: I played Talos with a view towards non-diegetic, real-world constructive philosophy and beliefs, and consciously approached the in-game dialogue as a sounding-board wherein I might explore alternative perspectives to ruminate on after game's end. Tom's task was to create a game, a dialogue, a narrative which functions as a complete experiential system in itself, one which allows players uninterested in such long-term reflection to engage in circumscribed philosophical work. You can, in Talos, construct your way into a perspective in a dozen hours of gameplay by repudiating Milton; in my view, by bringing Milton with you, you symbolically promise to continue that work, to maintain an ongoing defense of and reflection on your own moral values.

But I am not every player; let's get back to you, your options as a philosophically constructive (or destructive) agent in Talos, and what it looks like in this dialogue to construct a system of morals or allow it to be torn down. The conversation "Milton 2_4" addresses a second set of variables on top of those established in the certification program survey, building on your concept of personhood with an account of consciousness. As summed up in the designer comments in the code:

#Most of this dialog is unique to each of the four possibilities

#What is consciousness made of?4
#"Neurons" set: Physicalist
#"The soul" set: Religious
#"Some non-physical thing" set: Dualist
#"Some complex physical system" set: Functionalist

Without even reading how any of these branches play out, considering that you are an AI and you are conscious, you can imagine that both the Physicalist and Functionalist beliefs could prove contradictory to your very existence if challenged in the right way. By this point in gameplay, you've visited each of the three worlds, you may even have ventured inside the tower; you've gained access to most of the Archive, read theories of neuroscientists, theologians, and philosophers supporting each of these concepts of consciousness; you've been in this game-world long enough to have formed multiple ideas on consciousness which make sense for you both in-game and in reality. As the AI, you have attained enough experience to have complex and conflicting beliefs, so unlike the initial certification program, this conversation allows you to backtrack and play through another line of inquiry—to explore more than one perspective. Let's say you initially enter the conversation an ambivalent functionalist, but by branch's end are increasingly unable to defend your chosen perspective against Milton's objections. Here's what your conversation might look like:
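In lieu of the conversation chart itself, here is a minimal Python sketch of the shape such an exchange takes, built on the designer comments quoted above; the answers and flags come from the code, while the objection and replies are my paraphrase of Milton's tin-cans argument, not quotations:

# The four stances on consciousness and the flags they set, per the
# designer comments above.
STANCES = {
    "Neurons": "Physicalist",
    "The soul": "Religious",
    "Some non-physical thing": "Dualist",
    "Some complex physical system": "Functionalist",
}

def functionalist_branch(concede):
    # Milton's objection, paraphrased rather than quoted:
    print("MILTON: If consciousness is any sufficiently complex physical")
    print("        system, is a long enough string of tin cans conscious?")
    if concede:
        print("YOU: There must be a difference... I just can't say what it is.")
        # Unlike the certification program, this conversation lets you
        # backtrack and try another of the four stances.
        return "backtrack"
    print("YOU: Complexity of the right, functional kind is what matters.")
    return "press on"

flag = STANCES["Some complex physical system"]  # enter as a functionalist
outcome = functionalist_branch(concede=True)
print(flag, "->", outcome)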

There's one thing that's absolutely essential to realize and keep in mind about your discourse with Milton: Milton isn't there to convince you of anything. You may not buy his objections to the functionalist path: instinctively, you know that computers and a string of tin cans are not equally complex systems. But can you put forth a philosophically sound argument as to why this is the case? If you are swayed by any of Milton's objections to your reasoning, you're stuck with the vague sense that "there is [a difference]." Unfortunately, vague beliefs don't hold up as philosophical justification. Even if you still believe that consciousness is a complex physical system, your instinct will likely be to try another option, to see if you can "beat" Milton at his own game, convince him to come around to your perspective. Of course, you can't—Milton will always find fault with your arguments; he refutes all beliefs—but you don't know this; we assume that all arguments can be won, so you're probably going to keep trying. These moments of argumentative failure do not necessarily reflect moments of philosophical failure, though—hence my stubborn insistence that recursion and exploration of all of Milton's argumentative refutations are, in fact, philosophically constructive activities. As Tom said in our interview, Milton won't allow you to build a belief system; by tearing down all belief systems with equal vigor, what Milton allows you to do is assess which convictions hold up best against his destruction.

This is the essence of playtesting philosophy: you're exploring perspectives, testing out rhetorical strategies and argumentative structure. You, as the AI, may be playing as a nihilist, allowing Milton to tear down all of the beliefs you put forth, but you know that you're just playing a game! In these conversations, the fusion of you-the-AI and you as yourself is broken; your character becomes a vessel through whom you test out possible arguments for opinions with which you don't agree, disavow beliefs that are, in reality, your strongly held convictions, acquiesce to refutations by which you aren't convinced. You—as a person—are playing with how to argue philosophically, how to defend your beliefs. You are deciding which of Milton's refutations are least compelling, least logically viable, and by extension to which convictions he does the least damage. By end-game, the program may recognize your character as nihilistic, but your experience as an individual external to the game? That's an entirely different story.

5. "In this world we have all the answers we need. But out there? Who are we out there when we are neither master nor servant? What meaning can we find in a world that has no purpose?" —Elohim

In your final conversation ("Milton3_5.dlg"), Milton gives you one last chance to "recognize the holes in your understanding," to repudiate your beliefs. You may refuse to do so, and instead make a series of truth-claims based upon all of the set values you've accumulated throughout your and Milton's debates. Looking at these options, you see all of the game's "playtestable" concepts, all of the ideas you have been able to think through and try out in The Talos Principle. And in these final moments of gameplay, your choice not to repudiate them allows you instead to reflect on all of the choices you've made in the context of the game's impending end, to articulate and stand firm in the system of moral and ethical convictions that you've constructed through play:

I still claim computers cannot be persons.
I still claim to know something about the world I inhabit.
I still claim there is such a thing as a good person.
I still claim that only those who contribute deserve moral respect.
I still claim that only persons deserve moral respect.
I still claim that animals are just as valuable as people.
I still claim that all people are equal.
I still claim morality is about maximising goods.
I still claim that morality has nothing to do with consequences.
Only human beings can be persons, that I know.
To be a person, you must be a citizen.
Consciousness is necessarily part of the physical world.
There is a God, and He is watching over me.
Consciousness doesn't obey the laws of physics, this much is plain.

You'll notice that in none of these options do you make a claim about yourself, the AI character; instead, the statements you choose from are principles, worldviews which apply as much to the real world as to the game-world. Does it matter beyond the game that you consider your character to be a person? Not really. Does it matter that you believe that human beings do not necessarily hold a monopoly on personhood? Now that might have ramifications for your perspective on the world in which you really live. So why not just read The Talos Principle Terminal Booklet containing all of the Archive texts; why not just read the dialogue branches or the code files? Why, if your character, the plot, and the game-world are ultimately inconsequential, is it essential that you come to these beliefs through immersed virtual play?

In "Existentialism is as Humanism," Jean-Paul Sartre asserts that at the core of existentialism is the belief that "man chooses himself ... [and] that in choosing for himself he chooses for all men. For in effect, of all the actions a man may take in order to create himself as he wills to be, there is not one which is not creative, at the same time, of an image of man such as he believes he ought to be." In effect, through every choice you make, you are effectively "creating a certain image of man as [you] would have him to be. In fashioning [yourself you] fashion man." The plurality of possibilities, the burden of free will, imparts on us a sense of responsibility to act in a way that validates the whole of humanity. What room is there for ethical exploration when the weight of human values rests on your shoulders? Free will can be ethically and intellectually restrictive, but within the consequence-free virtual world of The Talos Principle, of a video game, you can exercise limited free will. Choosing from pre-scripted options, reading curated texts as the solitary inhabitant of a virtual space, told again and again that you posses free will but recognizing that you're only free within the designed constraints of the game-system, the burden is lifted. You can test concepts that in real life might feel morally taboo; you can claim that people are inherently unequal, that people are not good (someone else wrote the options, after all—you're not responsible!). You can play with amorality, with irresponsibility; all is permitted.

So playing in The Talos Principle allows you a respite from the responsibilities of thinking "the right things," from socially-determined ethical constraints. Elohim will finally admit as you ascend the tower that "You were always meant to defy me. That was the final test." He acknowledges that "the Tower leads out of this world. It leads to freedom and truth. But it also leads to the end of us"; he implores you to recognize that "Here, in this world, we know who we are. We each have our part to play in the Process. We have a purpose, a destiny!" Of course, the game must end; the world must end; you must exit the simulation, the game-world. As you do so, Elohim's send-off reminds you that you re-enter a world wherein there is no set purpose, no process—just choices without a script. The archive of human knowledge to which you have access is uncorrupted and un-curated, potential corroborators and non-corroborators of your beliefs all around you. You leave your AI looking out onto a vast and empty landscape, its new task suggested as being to build a world. Your task, The Talos Principle suggests, is not dissimilar: you live in a world of endless possibilities. Go out and play in it. Explore it. Question everything, but believe in something. Make of it all what you will.

Child program independence check. . . . PASSED!


Footnotes

1. In the following charts, quoted source code has been lightly edited for clarity.
2. Steam is a highly popular digital distribution platform for video games, and a framework within which games are played, reviewed, and discussed. By completing certain designer-determined tasks in games, players earn "achievements."
3. In the code files, it is labeled "Milton3_5.dlg."
4. You are first asked and answer this question in "Milton 2_3," the previous conversation.