We have maintained that the basic notion of free will is the same for humans, animals, and robots. Praising or blaming agents for their actions is a more advanced notion, requiring additional structure, e.g. a distinction between good and bad actions or outcomes. Praising or blaming humans in particular requires taking into account human peculiarities not shared with agents in general, e.g. with robots.
Consider the verdict ``not guilty by reason of insanity'' as applied to a person with schizophrenia. Schizophrenia is basically a disease of the chemistry of the blood and nervous system. At a higher level of abstraction, it is regarded as a disease in which certain kinds of thoughts enter and dominate consciousness. A patient's belief that the CIA has planted a radio in his brain is relieved by medicines that change blood chemistry. If the patient's belief caused him to kill someone whom he imagined to be a CIA agent, he would be found not guilty by reason of insanity. If we wanted robots susceptible to schizophrenia, we would have to program something like schizophrenia into them, and this would be a complicated undertaking, and an unmotivated one: unmotivated by anything but the goal of imitating human schizophrenia. The older M'Naghten criterion, ``unable to understand the nature and consequences of his acts'', uses essentially the criteria of the present article for assessing the presence or absence of free will.
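As a deliberately crude sketch of the higher level of abstraction just mentioned, one might program a toy agent in which an intrusive belief, once present, pre-empts ordinary deliberation. Every name below is an illustrative assumption, not anything proposed in this article, and nothing so simple would amount to human schizophrenia; the sketch only shows what ``thoughts that dominate consciousness'' might look like in program form.

```python
from collections import namedtuple

# A belief carries its content, whether it is intrusive, and, if intrusive,
# the action it compels. These fields are illustrative assumptions.
Belief = namedtuple("Belief", ["content", "intrusive", "compelled_action"])

def choose_action(beliefs, default_options):
    # An intrusive belief, once present, pre-empts ordinary deliberation,
    # mirroring thoughts that "enter and dominate consciousness".
    for b in beliefs:
        if b.intrusive:
            return b.compelled_action
    # Ordinary deliberation (stubbed here): pick the first sensible option.
    return default_options[0]

normal = [Belief("the sky is blue", False, None)]
delusion = normal + [Belief("a radio was planted in my brain",
                            True, "remove the radio")]

print(choose_action(normal, ["make breakfast"]))    # -> make breakfast
print(choose_action(delusion, ["make breakfast"]))  # -> remove the radio
```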
I don't know whether all praise or blame of robots is artificial; the matter requires more thought. Verbally, one might praise a robot as a way of getting it to do more of the same, as the sketch below illustrates.
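A minimal sketch of that idea, treating verbal praise as a reinforcement signal under the assumption of a toy weight-based action chooser; the action names, learning rate, and update rule are all illustrative assumptions, not a mechanism from this article.

```python
import random

class PraisableRobot:
    def __init__(self, actions):
        # Start with equal preference for every action.
        self.weights = {a: 1.0 for a in actions}
        self.last_action = None

    def act(self):
        # Choose an action with probability proportional to its weight.
        actions, weights = zip(*self.weights.items())
        self.last_action = random.choices(actions, weights=weights)[0]
        return self.last_action

    def praise(self, amount=0.5):
        # "Praising" the robot raises the weight of the action it just
        # took, so it tends to do more of the same.
        if self.last_action is not None:
            self.weights[self.last_action] += amount

robot = PraisableRobot(["tidy the room", "sing", "idle"])
for _ in range(20):
    if robot.act() == "tidy the room":
        robot.praise()      # praise only the behavior we want repeated
print(robot.weights)        # "tidy the room" now has the largest weight
```

On this toy account the praise is not artificial at all: it is a causal input that changes the robot's future behavior, which is part of what praise does for humans too.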