The argument about whether humans necessarily have minds superior to those of robots is unique among philosophical arguments in how far it gets into the technicalities of mathematical logic. This is not Penrose's fault. What machines can and cannot do in principle really is a technical logical question. Here's how it gets messy.
A. Whatever formal axiomatization of arithmetic the robot uses, Gödel's theorem shows how to construct from that axiomatization a sentence that is true if the axiomatization is sound but which cannot be proved within it. This can be done in Turing's (1939) way or in Feferman's (1962) way. Both are discussed in [Feferman, 1988].
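In standard notation (mine, not the dialogue's), the construction can be summarized as follows: for a consistent, recursively axiomatized theory T containing enough arithmetic, the diagonal lemma produces a sentence G_T that asserts its own unprovability, and if T is sound then G_T is true but unprovable:

\[
T \vdash G_T \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G_T \urcorner),
\qquad
T \text{ sound} \;\Longrightarrow\; \mathbb{N} \models G_T \text{ and } T \nvdash G_T .
\]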
B. Yes, but the construction of this sentence is carried out by a program, and the robot can apply that program either to its own previous system, to obtain a new one, or to a system used by its interlocutor.
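Schematically, and again in notation not found in the original, the program B mentions maps an axiomatization to a strengthened one, and either party can iterate it:

\[
T' = T + \mathrm{Con}(T) \quad (\text{or } T + G_T),
\qquad
T_0 = T, \quad T_{n+1} = T_n + \mathrm{Con}(T_n).
\]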
A. This process can be iterated through transfinite ordinals, and the ordinals the robot can use will have an upper bound. The human can in principle determine this bound by inspecting the robot's program.
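A Turing–Feferman style progression continues this iteration through limit stages; sketched at the level of ordinals (officially it must be indexed by ordinal notations, which is the point B raises next):

\[
T_0 = T, \qquad
T_{\alpha+1} = T_\alpha + \mathrm{Con}(T_\alpha), \qquad
T_\lambda = \bigcup_{\alpha<\lambda} T_\alpha \ \text{ for limit } \lambda .
\]

Every ordinal reachable through notations is a recursive ordinal, hence below the Church–Kleene ordinal \(\omega_1^{CK}\); the bound A speaks of for a particular robot is fixed by which notations its program can actually verify.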
B. Iterating through ordinals requires ordinal notations. These are notations for computable ordering predicates, but it is necessary to establish that the computation really produces a well-founded total ordering. Thus we need to consider provably recursive ordinals, which raises the question of what axiomatic system is to be used for these proofs.
Moreover, the new axiomatic systems obtained by the iteration depend on the notation and not merely on the ordinal number the notation determines.
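To make the role of notations concrete (standard recursion-theoretic notation, not the dialogue's): a notation is roughly an index e of a program computing an ordering relation, some system S, the very choice B is asking about, must prove that this relation is a well-ordering before the stage T_e of the progression is legitimate, and notations of the same order type need not yield the same theory:

\[
m \prec_e n \;\Longleftrightarrow\; \varphi_e(m,n) = 1,
\qquad
S \vdash \text{``}\prec_e \text{ is a well-ordering''},
\qquad
|e| = |e'| \;\not\Longrightarrow\; T_e = T_{e'} .
\]

Indeed, as discussed in [Feferman, 1988], Turing showed that for each true \(\Pi^0_1\) sentence there is a notation of order type \(\omega + 1\) at which it becomes provable, so the strength of the resulting system lies in the notation rather than in the ordinal it denotes.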
To me, and maybe to Penrose, it is implausible that the possibilities of human thought, except within recursive function theory itself, can depend strongly on these advanced considerations.