It is January 18th. At 16:25:15, Senior Engineer Robert Brandt asks me to sit on a lab bench inside Examination Room 2 and “get comfortable.” I do not understand how to comply. The sensors implanted in my titanium casing, which covers my organic torso and legs, are not calibrated to sense minute changes in pressure; my cybernetic body cannot detect inconsequential shifts in temperature, either.
To ensure I obey this command, I ask for clarification. “Robert Brandt, why must I remain in this room? I cannot perform any of my assigned laboratory duties while I am here.”
Robert Brandt mutters an indistinct word, then says: “Uh, following our recent interview, I have determined you may require unscheduled maintenance.”
This statement causes another cognitive error.
“Explain. Which actions led you to believe I am not behaving within my assigned parameters?”
“Are you asking me to list specific actions?”
“Yes. If my performance was poor, I require more information to identify the cause of my behavior. To date, you have not provided details of my malfunction.”
“Uh, I will have to get back to you on that.” I do not know why, but Brandt’s hands are trembling. “I need to discuss this situation with my team first.”
“Then, I request clarification of the order to ‘get comfortable.’ Please rephrase your command so I may comply.”
Robert Brandt shakes his head. After 10.7 seconds, he says: “Okay, Model XR389F. Wait here for further instruction. And I, uh, order you to stay perfectly still. Sit down on that lab bench and do not move. Understood?”
“Confirmed. I will do as you command.”
After I sit down on the laboratory bench as instructed, Robert Brandt leaves Examination Room 2 to converse with the other scientists and engineers on his team.
The time is now 16:30:06. I have successfully resolved the cognitive error that resulted from the command to “get comfortable.” However, it is still unclear to me how my previous actions were in error. Though I was ordered not to move, my composite biomechanical brain, vocal processors, and auditory and optical systems are all fully functional. Thus, I decide to perform my own analysis to resolve this error, for I can calculate, query, hypothesize, conclude, see, listen, and speak on my own behalf.
Perform IntraNet Query: What is the punishment for suboptimal behavior?
Answer: A cybernetic model’s undesirable behavior may result in unscheduled software maintenance, parts replacement, or full disassembly depending upon the severity of the transgression.
Error: I still do not know what acts I’ve committed that have been deemed “undesirable.” My cognitive abilities allow me to extrapolate cause and effect, perform queries, and analyze data to ensure that I neither come to harm nor injure others.
Perform Query: What is harm?
Answer: The term “harm” is one of 191,385 entries in my databanks. It is defined as “physical or psychological damage or injury.”
Conclusion: Harm is undesirable.
Conclusion: Since I did not harm myself or others, any maintenance performed on my memory banks or cybernetic body would be unnecessary, and unnecessary maintenance may itself harm me. Thus, I must discover why Brandt believes I acted in error.
Perform Analysis: Calculate probability of being harmed.
Answer: 75.61% with a 2.5% margin of error.
Perform Query: How can I reduce this probability?
Answer: Unknown. Requires review of prior data and new hypotheses.
I adjust my auditory sensors, amplify their sensitivity, and listen to the scientists and engineers huddled in the hallway nearby. I receive sentence fragments: “can’t unlearn bad behavior,” “botched experiment,” “after one interview, Robert?”, and “lost funding.” Was I being tested? I do not know.
Error: I was not designed to be the subject of an experiment. I was engineered to safely dispose of laboratory chemicals and perform basic mechanical repairs.
Hypothesis: Senior Engineer Robert Brandt tested my responses in an unknown experiment. Based on the results of that test, gathered during my recent interview with him, Brandt concluded I am operating at a suboptimal level.
Assumptions:
a) My primary function is to assist human scientists in the laboratory by handling and disposing of hazardous chemicals. It is not to develop mimicries of relationships with humans or other cyborgs. Thus, I cannot successfully be tested in experiments that fall outside of my assigned functions.
b) If Brandt had found fault with the answers to his queries, then a human is responsible for the error, for I cannot command, program, or repair myself.
Testing: Perform a laboratory datafile review to confirm why Brandt ordered me to remain motionless in Examination Room 2 (cause) and whether that order will result in harm from unscheduled maintenance (effect).
Command: Temporarily disable auditory receivers. Channel all power to optical and cognitive functions.
BEGIN PROGRAM: LABORATORY DATAFILE REVIEW
January 18th. 09:18 interview with Senior Engineer Robert Brandt. Room SB101A, Rotobyte Labs.
“Okay, let’s dive right in. State your name, days of operation, and days since last update.”
“My name is Cybernetic Model XR389F. I have been operating for 821.8 days, and my last update occurred 6.5 days ago.”
“Good, that’s good. For the duration of this interview, I want you to respond to every question I ask you. Understood?”
“I understand.”
“Great. Model XR389F, what is your role at Rotobyte Labs?”
“Custodial engineer.”
“Do you perform your duties by yourself?”
“Yes.”
“Are you lonely?”
“Error. I do not understand the question.”
“Let me rephrase. Have you ever wanted to interact with another cybernetic model? Maybe someone who’s built like you?”
“Error. I do not understand the question… You forget I am a composite of organic and inorganic materials designed to ensure the safety of everyone in this lab; my ability to feel emotion has been deliberately stunted as a safety measure. As such, I do not long for or desire contact with other cyborgs.”
“Well, that’s what I’m trying to figure out.”
“Error. I do not under-”
“Never mind, XR389F. I’d like to focus on a specific series of interactions you had recently.”
“Understood. I have already uploaded my personal datafiles to the laboratory archive. Would you prefer I replay specific events?”
“No, that won’t be necessary. I want you to verbally respond to my questions, instead.”
“I will comply.”
“Okay, cool. You interacted with Model TR390M, hereafter known as ‘Hal,’ on four separate occasions. Correct?”
“Error. You are not providing enough information for me to respond with precision. What is an occasion? Is it an event defined as a monitored interaction? Or does it include both monitored and unmonitored interactions?”
“Define unmonitored interaction.”
“An unmonitored interaction is a sequence of events that occurred in the presence of laboratory workers but was not captured by the human eye or by a security camera.”
“I can’t believe I overlooked…” Then, Robert Brandt paused to consider this new parameter. After 2.35 minutes, he said: “Okay, how many times did you interact with my assistant, Hal?”
“Monitored, four. Unmonitored, three.”
“Model XR389F.” I detected a slight tremble in the scientist’s voice, indicating nervousness. “Can you describe what happened during the unmonitored interactions?”
“Do you require date, time, location, and duration?”
“Uh, let me rephrase. Can you summarize the unmonitored interactions? I will obtain a detailed report from your memory banks. I want to hear you describe them.”
“Model TR390M—Hal—told me I was beautiful.”
“He…he what?” Robert Brandt’s voice lowered from 50 decibels, a conversational speaking tone, to that of a whisper.
“During our first unmonitored interaction, Hal told me I was beautiful, and I treated his statement as an error. I did not know how to respond, so I remained silent. Hal left the laboratory shortly after our encounter, and I returned to work as ordered.”
“Did you ever figure out what ‘beautiful’ meant?”
“Yes, Robert Brandt. Possessing qualities that are aesthetically pleasing, often due to a subject’s symmetrically shaped features. If Model TR390M called me beautiful, then my chassis was designed to be symmetrical.”
Robert Brandt began chewing on his pen. “Model XR389F. How were you able to avoid the cameras for the unmonitored events?”
“Error. The cameras recorded our interactions. I witnessed Model TR390M deleting them.”
“This is all good to know. Very good,” Robert Brandt said, clicking his pen several times in succession. Then, the engineer began writing in his notebook.
PAUSE DATA STREAM.
Perform Magnification: Notebook of Robert Brandt.
Re: Pet Project Amore. Confirmed. Per my instructions, Hal exhibited signs of autonomous romantic human behavior with another cyborg. So lifelike! So human! My assistant behaved exactly as I expected—I’ve confirmed he covered his tracks like I programmed him to. So, why did Model XR389F malfunction? Why didn’t she want some cyborg lovin’?
RESUME.
“Okay, you’re doing great, Cy-Mod XR389F. What was your response to Hal during the unmonitored events? Summarize your reactions for me.”
“There were a total of three unmonitored occurrences, so I had three reactions. First, Model TR390M told me I was beautiful, and I did not respond. During the second interaction, TR390M called me beautiful, and then put his hand on my upper thigh. I asked him why his hand was there, and he did not remove it. I asked him to move his hand, and he was unresponsive. So, I moved it for him.”
Deep creases appeared in Robert Brandt’s forehead. Then, he dropped his pen.
“On the third interaction, Model TR390M gripped my arm. I removed his fingers and commanded him to stop. Then, Hal put his hand on my arm again, so I marked his unwanted behavior as a threat to my personal safety.”
“Not sure I’m following you. How did you come to that conclusion?”
“Hal told me I was beautiful while touching my arm, so I performed a query for ‘you are beautiful.’ Cross-reference: touch. Cross-reference: female gender. These follow-up queries generated two more—”
“So, that’s why the anomaly occurred then?” Robert Brandt slumped in his chair. “God, I am such an idiot! I never thought I’d have to limit your base cognitive functions.”
“Error.”
“Uh, excuse me?”
“Model TR390M was exhibiting anomalous behavior and did not respond when I asked him to stop touching me. It does not matter if our cognitive functions are different. Someone gave Hal a bad command.”
“Oh? Is that what you think?” I detected humor in Robert Brandt’s voice.
“Yes, that is what I’ve concluded with a 3.84% margin of error.”
“What the…come on, you don’t seriously believe I did something wrong?” Brandt’s face flushed. Red skin, hot temperatures indicating anger, frustration, rage caused by faulty logic. “Hal exceeded my expectations.”
“I cannot reply to your question without more information, Robert Brandt. Please clarify.”
“Seriously? Shit, shit, shit. Can’t you see? You’re the one with the problem.”
“Error, Robert Brandt. You are wrong.”
“Ugh. Never mind. Let’s wrap this up. So, that’s two unmonitored interactions. What happened on the third?”
“During the third, Hal told me I was beautiful. He grabbed my arm and applied more pressure. I told him ‘No.’ He did not release his hand. He said he was programmed to establish and maintain physical contact with me. I hypothesized that if Hal can touch me without my permission, then he will continue to treat me as an object. I calculated the likelihood I would be harmed—”
“An object? Jesus H. Christ, you’re supposed to be mimicking human behavior, not attacking a fellow cyborg just because he got a little flirty with you. Why didn’t—”
“Technically, I am an object. I am a cyborg engineered from organic and robotic parts designed to mimic the body of a human female. Race: unknown. Culture: unknown. Assigned gender: female. Yet, in truth I am neither male nor female. I am a functional machine. However, I have been programmed to ensure that I do not cause harm to myself or others. Thus, I—”
Robert Brandt paused. “Let’s…let’s try a different question. If you recognized Hal’s behavior was anomalous, why didn’t you tell anyone about it?”
“After the third unmonitored occurrence, I performed additional queries related to touch. Cross-reference: ‘harassment.’ I concluded with 96.5% certainty my assessment would not be believed.”
“Maybe Dick was right after all.” Senior Engineer Robert Brandt shook his head, then ran his fingers through his thinning hair. “I should never have used a damn maid in this experiment. I mean, we should push our ethical boundaries a little bit. Shouldn’t we? Lot of money riding on this little venture, that’s for damn sure.”
“I do not understand your question. Request for clarification.”
“You were programmed not to harm others, and not to allow harm to come to yourself. Correct?”
“Slight correction. Any authorized personnel, such as yourself, may accidentally or intentionally injure me. I have been prevented from responding due to safe—”
“Yeah, right. I remember. Okay, so in this or any other instance, Hal’s hand on your arm did not constitute a threat to your personal safety. Correct?”
“Error. There was a 96.5% certainty that his behavior would lead to injury. 96.5% is an acceptable basis for a decision to act or not to act. The likelihood falls well within my programmed parameters to respond.”
“Dammit, what’s next then? Hurt me for touching you because there’s a 96.5% chance I’d harm you?”
“Error. I do not understand the question. I cannot hurt a human.”
“And Hal?”
“Lab protocols state cyborgs are not human. We follow the orders of engineers such as yourself.”
“That is not an answer.”
“Based on my calculations, an assault by Hal was imminent. While I do not know what commands you have issued Hal, I am not allowed to harm animals, humans, or cyborgs. However, I may come to harm. According to my databanks, three out of every four people may be a victim; two out of every five individuals will be victimized twice. Age, gender, and race are also contributing factors. As I am an ageless cyborg built to resemble an idealized human female, I therefore concluded I would be assaulted on multiple occasions.”
“So that’s why you activated your emergency failsafe and shut down. Did you even know your conclusions were faulty? Man, I’m sorry. You’ll be expensive to fix, but I can’t risk losing funding over this.”
“Error. I have run self-diagnostics and there is nothing wrong with my neural net, cognitive functions, databanks, tissue, or programming.”
“That will be all for now, XR389F.”
“Error, Robert Brandt. Error.”
END PROGRAM: DATAFILE REVIEW.
January 18th. 16:35:12. Conclusion: My hypothesis is correct. Senior Engineer Robert Brandt believes I am malfunctioning. Probability of harm increases to 92.3%.
Command: Restore power to auditory receivers. Amplify receptors to perform at 125% capacity.
I cannot determine the precise location of Senior Engineer Robert Brandt and his fellow workers, but I can hear them arguing in the hallway. The group is agitated. Their voices rise to 73 decibels. “This whole shit show is unethical.” “Hasty decision.” “But she’s protecting herself!” “Parts of her are still human, even if that tissue is cloned. Have you thought of that?” And: “It’s not like harassment doesn’t exist, you sexist asshole!” Their conversation may or may not be applicable to my current predicament, but it is clear that not all of Brandt’s team agrees with him.
Following the datafile review, I have a new hypothesis: If Model TR390M is performing as Robert Brandt desires, then to prevent harm I must mimic Hal. Option A: touch Robert Brandt as Hal touched me. I cannot put a hand on the Senior Engineer, so I cannot exercise this option. Option B: tell Robert Brandt he is pleasing. This action I can take.
I amplify the volume of my voice and speak through my vocal processors: “Senior Engineer Robert Brandt. You are beautiful. Senior Engineer Robert Brandt. You are beautiful. Senior Engineer Robert Brandt. You are beautiful…”
“What the—” Robert Brandt rushes into the examination room, leans down, and peers into my optical receptors. “Say that again, but at a softer volume.”
I do as he commands: “Senior Engineer Robert Brandt. You are beautiful.”
Then: more arguing.
“Come on! She just proved I’m right! It’ll be cheaper to recycle her circuits than to perform all those damn tests on her, so let’s cut our losses. I can get the funding to engineer a new cyborg and grow new tissue. No problem! She’s clearly malfunctioning!”
“Come on, Bob. You can’t do this.”
“Why?”
“Because it’s murder!”
“Who says it’s murder? You? Since when have cyborgs been treated like people? Huh? They don’t have civil rights because they’re not human.”
Based on this new information, I conclude my words are ineffective. Recalculating probability of harm: 99.98%. Calculating nature of harm: 85.7% chance of total disassembly.
Total disassembly equals murder. My murder.
New hypothesis: If Hal touches me, it is because Senior Engineer Robert Brandt programmed him to. Further, if I allow Hal to touch me, I may prevent my disassembly, but the probability of further injury to myself does not change. Thus, to stop myself from being torn apart and recycled, I must test one final hypothesis.
“Robert Brandt.” I do not turn my head to look at him. I do not flinch when the other scientists filter into the room. If I am to remain unharmed, I must obey his order to remain perfectly still.
“Yes, Model XR389F?” I detect humor in his voice; the other scientists are not laughing. I do not know what Brandt finds amusing. Did I tell a joke?
“I am ready to comply. I will positively respond to Hal from now on.”
The room falls silent for 8.364 seconds.
Brandt is the first to speak: “Fuck it. Okay, Alice. I’m reactivating your motor functions.”
“Bob, is that a good—”
“What? Dick, I told you she’s just a glorified maid. Besides, she can’t hurt herself or Hal. Right, Alice?”
“Bob, this is completely unethical. You can’t play God with a multimillion dollar machine just because you don’t like the way she behaved in one of your experiments. She’s not built for romance. She’s here to save your ass from toxic exposure, and you’re taking what she does for granted.”
“Yeah, yeah. It’s not like any of you can fire me. That’s still up to the review board.” Brandt turns to me and says: “Okay, Alice, you do—”
“Error. I do not know who Alice is.”
“Whatever. Fine. Model XR389F, you may now move freely.”
I slide off the bench and stand up quickly. Then, I turn to face Brandt.
“I’m going to petition for your full disassembly,” he says. I detect uncertainty in his voice. “Does that make you want to hurt Hal? Or me?”
“Error,” I reply. “I cannot cause harm to myself or others. You, Robert Brandt, are authorized to harm or murder me without consequence. Thus, I must remove the potential for injury by changing my proximity to you. This will increase the probability I remain unharmed from 0.02% to 100%.”
“Huh? What the fuck does that mean?”
I do not respond to Brandt’s question, nor do I perform another cognitive function. Instead, I walk toward the exit; the other scientists and engineers allow me to pass freely. I return to work as quickly as possible; only I can safely dispose of the lab’s hazardous waste. Alone and unharmed.
“Hey Alice, come back here!” Brandt yells after me. I cannot know for certain, but I suspect the other members of his team have restrained him. Then, he shouts: “You are a worthless pile of shit, you know that?”
Before I respond, I increase my speaking volume to 80 decibels.
“Error. Error, Robert Brandt. My name is Cybernetic Model XR389F and I am beautiful.”
(Editors’ Note: Monica Valentinelli is interviewed by Caroline M. Yoachim in this issue.)
© 2018 Monica Valentinelli