The Ship of the Dead and the Girl at the Edge of the World

Chapter 2: R is for robots

In response to the question I had squeezed out, or rather my plea for confirmation, Riza took a breath and then gave a small nod.

“Ira has explained that there are no human survivors left, so it’s highly unlikely that Miss Whitford is still alive today.”

“…Then…”

In that moment, none of it mattered to me.

Because it reminded me of just how much her existence had meant to me, someone who had been nothing more than a “correspondent”. I knew that if I fell to my knees here and now, I would never be able to get up again.

So I knew that I must not let my knees give out.

Still, I just barely managed to hold myself up with what little sense of mission I had left.

“…I’m going into Astoria. Hah… and to think I was really looking forward to meeting Miss Whitford.”

I said it half in desperation, as a joke, but it really was in poor taste, and Riza didn’t so much as smile at me. It wasn’t the least bit funny to me, either.

She gave a small, expressionless shake of her head.

“I don’t recommend it. Its safety is questionable.”

“But if there are no humans left in Astoria, we will have to investigate and send a report to Earth. Isn’t that right?”

“That’s true, but it isn’t a duty worth risking your life over.”

Riza’s reply was grudging.

We have a mission.

If we are stopping at Astoria, the situation has to be reported; either way, one of us humans has to send an investigation report to Earth. And if I’m the only human left alive here, that report falls to me. It is, without a doubt, a duty imposed on us.

But even so, that duty extends only as far as what is possible; it does not require the impossible at the expense of our safety.

And Riza here is a robot who understands safety better than anyone.

“Riza, I’ll be fine. The Three Principles apply to Ira, too, you know.”

“The Three Principles aren’t always perfect.”

I recited the famous Three Principles of Robotics in my head.

The Three Principles of Robotics, devised long ago on far-off Earth, are the constraints that govern the safe operation of robots equipped with personality-bearing artificial intelligence.

They read as follows:

Principle 1: A robot must not harm a human being, nor may it, through inaction, overlook a human being coming to harm.

Principle 2: A robot must obey the orders given to it by humans, except where such orders would conflict with Principle 1.

Principle 3: A robot must protect its own existence, so long as doing so does not conflict with Principles 1 and 2.

These principles are known to cause various problems in actual operation, but even today they continue to be applied to all robots, under a variety of expanded interpretations.

Of course, they also apply to Riza, standing right in front of me.

The biggest problem is that in practice the principles often contradict one another, and the number of cases that must be considered explodes as a result. A robot caught in that bind falls into circular reasoning and becomes unable to do anything at all, and in the worst case its delicate quantum brain destroys itself.

For that reason, real-world implementations have given up on applying them with complete strictness.

This allows a robot to avoid self-destruction through logical inconsistency, but at the cost that safety is no longer strictly guaranteed once conditions become complex.

Another problem is that the principles rest on the basic premise that humans exist.

In other words, if no humans exist, the application of the first and second principles becomes hopelessly ambiguous, and it becomes virtually impossible to predict what a robot will do.

A number of other problems have been pointed out as well, but the principles are still considered indispensable, and humanity has muddled along with them for many long years.

“It’s not like Ira’s ‘gone crazy’, is it?”

“In the sense that she appears to be acting in accordance with some norm of her own, it seems likely that she has not gone mad. However, it is unclear what that norm was based on in the first place.”

“Still, if the Three Principles are functioning to some extent, I don’t think it’s likely she would do me any definite harm. After all, I’m the only human here. In other words, mine is the only safety she needs to think about.”

“Actually, I’m a little skeptical about that ‘only human’ part.”

“Does that mean there might be survivors?”

I pressed Riza, leaning forward in spite of myself at the hint that I might not be the only one.

If there were other survivors besides me, I couldn’t rule out the possibility that a contradiction among the Three Principles would end with the first principle being violated and me coming to harm. In other words, in that case my safety would not necessarily be guaranteed.

On the other hand, if there were other survivors besides me, that would give me an even stronger incentive to enter Astoria.

After all, I was on the verge of being left all alone in this vast universe. And I’m an ordinary human being, not some superhuman who never feels lonely.

Riza, however, shook her head at my question.

“That’s not it. It seems that Ira has created ‘human substitutes’ using biological parts, and is treating them as her masters in place of humans.”

“A robot making a human…? No… But that’s…”

I was puzzled by Riza’s explanation.

Actually, it’s possible to “make a human” with current technology.

The technology to create an entire human being from a single living cell, so-called cloning, was established long ago as a matter of course. Technically speaking, if we wanted to, we could do it any way we pleased.

However, I wondered whether such a thing was even possible for a robot constrained by the Three Principles.

In fact, the Three Principles leave the most important question undefined: what is a human being? It is therefore theoretically possible for a robot to create, say, an artificial human and then declare and recognize it as human.

But it isn’t that simple.

If a robot were to create, on its own, a human who stands outside the Three Principles, wouldn’t that amount to the robot indirectly violating them? If a robot could harm humans through an artificial human of its own making, for example, the Three Principles could be circumvented with ease.

However riddled with compromises their actual operation may be, indirect violations of the Three Principles cannot be permitted.

In this day and age, even for a human, cloning a human is all but forbidden, both technically and legally. For a robot to create a human being with its own hands is out of the question.

When I raised that question, Riza nodded.

“Yes, exactly. The Three Principles do not allow a robot to create a real human being. If a human created by a robot were free to violate the Three Principles, then the robot would consequently be violating the Three Principles itself. That potential danger is, in itself, a violation of the Three Principles.”

“Yeah, you’re right.”

“So instead of creating humans, Ira created human-like robots out of biological parts and applied the Three Principles to them. Ira serves these artificial humans by applying the Three Principles to them, mutatis mutandis, while the artificial humans, being robots, obey real humans such as yourself, Lyle-san. This prevents Ira from indirectly violating the Three Principles by harming a real human.”

“…I feel like something’s not right about it, but… is there a problem…?”

It was a roundabout hierarchy, but there didn’t seem to be any problem with it.

In other words, the artificial humans would be treated as humans by Ira, but would behave as robots toward me, a real human. They could do me no harm.

In short, the robots were merely playing at being humans and robots among themselves.

In the end, it was a shame that there were no survivors besides me, only robots, but at the very least I would be safe.

Even so, I would have to investigate these artificial humans and report back to Earth.

“The problem does not end there. Rather, this is where the biggest problem lies. I have been sent a list of all the artificial humans in Astoria. Please take a look at this.”

Riza said, turning one of the walls into a display and bringing something up on it.

When I saw what was projected there… I froze. It showed the faces of beings that looked exactly like humans. I don’t know whether that was all of them, but there were around twenty.

Then my gaze was drawn to a single point.

The sight was so shocking that it stirred in me not only confusion but even a faint passion.

After staring at it for more than a minute, forgetting to blink, I asked Riza in a trembling voice.

“Are you sure this is an artificial person made by a robot? Really?”

“That is what Ira says. I thought it would be important to you, Lyle-san, so I am telling you in advance.”

I was glad Riza was a tactful robot. If she had shown me this out of the blue, I would have lost my composure completely.

“…I understand. Thank you, Riza. It’s something I should have been prepared for.”

I stared back into the quiet eyes of the girl gazing out from the display, and bit my lower lip.
