Excerpt:
Reading Group: The Aloof can instantiate intelligences that traverse their network. Is there a way to detect intelligence in a signal? I don't know much about information theory, but I am familiar with the idea of measuring information content of a message. I want to know if there exists something analogous for detecting/measuring intelligence.
If intelligence is computable, am I running into the halting problem by considering how to detect it in a signal? Because I would have to consider whether it would halt. Is that right or wrong?
How could I decide that a sequence of bits is executable? If work has been done on this topic, where would I go to read more?
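For the "measuring information content" idea in the question, the standard quantity is Shannon entropy. Below is a minimal sketch in Python, using made-up data; it shows what entropy measures, and also why entropy alone cannot answer the question, since pure noise scores near the maximum.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Empirical Shannon entropy of a byte string, in bits per byte.

    This quantifies statistical information content only; it says nothing
    about whether the bytes encode an algorithm, let alone a mind.
    """
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

print(shannon_entropy(b"abab" * 1000))       # ~1 bit/byte: highly regular
print(shannon_entropy(os.urandom(1 << 16)))  # ~8 bits/byte: maximal "information", zero meaning
```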
Greg Egan: I'm not aware of any rigorous results having been proved about this issue, so what follows are just my own hunches.
I suspect that determining with *absolute certainty* whether a sequence of bits constitutes an algorithm under any interpretation whatsoever would be an intractable problem ... but nevertheless, in practice someone with enough experience and computing resources would actually have a reasonable chance of picking up most real examples. I think that across cultures there'd be a certain amount of "convergent evolution" in algorithm design and encoding, so that rather than every mathematically possible kind of scheme being used, there might only be, say, a few million that would really be sensible. Within each of those, there might still be a large number of completely arbitrary choices, but given a very large example to work from, I don't think it would be intractable to reverse-engineer the whole scheme.
To give a kind of miniature version of the problem, I think a computer scientist in 2090 who stumbled on a binary copy of a 2010-era operating system would have a very good chance of deducing the instruction set of the processor for which it was intended, even without access to detailed historical records about 2010-era hardware. Doing the same kind of thing across cultures would be much harder, but not exponentially harder.
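As a toy illustration of the kind of statistical reverse engineering being described, here is a hedged sketch that tries to guess a fixed instruction width in a raw binary by comparing the byte entropy of each column. The function names and the heuristic are mine, not anything from the discussion; it assumes fixed-width instructions (which many real instruction sets, including 2010-era x86, do not have), and it only hints at how "convergent" regularities in an encoding might be picked up.

```python
import math
from collections import Counter

def position_entropies(blob: bytes, width: int) -> list[float]:
    """Entropy of each byte position, treating the blob as records of `width` bytes."""
    entropies = []
    for offset in range(width):
        column = blob[offset::width]
        n = len(column)
        h = -sum((c / n) * math.log2(c / n) for c in Counter(column).values())
        entropies.append(h)
    return entropies

def guess_instruction_width(blob: bytes, candidates=(1, 2, 4, 8)) -> int:
    """Pick the candidate width whose columns look most 'structured'.

    Heuristic: if instructions really are fixed-width, some columns (opcode
    fields) will have much lower entropy than others (immediate operands),
    so the spread between column entropies is largest at the true width.
    """
    def spread(width: int) -> float:
        hs = position_entropies(blob, width)
        return max(hs) - min(hs)
    return max(candidates, key=spread)
```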
Now, if you accept that you can identify an algorithm in a signal, you're halfway to detecting an intelligent algorithm. Again, I think it's a problem that might be intractable if you wanted absolute certainty, but for similar reasons would still turn out to be possible in a very large number of cases.
I don't think the halting problem is relevant here. If an intelligent being is encoded in an algorithm, you'd expect them to take in sensory data and then respond to it in some way within a reasonably short time, whatever else might be going on in their minds. Even if, say, in the back of my mind I happened to be testing some mathematical conjecture such that you'd need to solve the halting problem in order to know whether I'm going to shout "Eureka" eventually or just go on fruitlessly calculating forever ... if I'm any kind of normal sentient being, I won't be stuck in an insensate trance until the conjecture has been resolved.
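To make the bounded-time point concrete, here is a minimal sketch. The `run_candidate` callable is hypothetical: assume it is some sandboxed interpreter that executes the decoded algorithm for at most a fixed number of steps and returns whatever output it produced. The check asks only whether behaviour within that budget depends on the stimulus; it never needs to decide whether anything halts.

```python
from typing import Callable, Iterable, Optional

def looks_responsive(run_candidate: Callable[[bytes, int], Optional[bytes]],
                     stimuli: Iterable[bytes],
                     step_budget: int = 100_000) -> bool:
    """Crude, inconclusive evidence of responsiveness, not a proof of intelligence.

    Feed the candidate different sensory inputs, cap each run at `step_budget`
    steps, and ask whether it produced any output and whether that output
    varies with the input. The halting problem never arises, because every
    run is cut off after a finite budget.
    """
    outputs = [run_candidate(stimulus, step_budget) for stimulus in stimuli]
    produced_something = any(out is not None for out in outputs)
    input_dependent = len(set(outputs)) > 1
    return produced_something and input_dependent
```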
