In the absence of any other model, it is currently fashionable to posit that consciousness and intelligence "emerge" from the complexity of life: our brains are complex enough that emergent intelligence can arise from them. What this actually means is anyone's guess.
Let's assume it's true, that out of sufficient complexity, consciousness and intelligence can emerge. How would this work? Would it require evolution in the traditional sense, heritable change accumulating either in gradual steps or in brisk saltations? Would a sufficiently complex entity be able to change itself so that consciousness emerges?
Ethical decisions rest on core values, plus models that predict the outcomes of possible actions. In an artificial intelligence, how would those values be determined? How would competing values be weighted against one another? And who would judge the outcomes?
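The weighting question can be made concrete with a toy sketch. Everything below is hypothetical: the value names, the weights, and the outcome estimates are invented for illustration, which is precisely the problem the questions above point at — someone, or something, has to choose them.

```python
# Toy model of value-weighted decision making.
# All values, weights, and outcome estimates are made up for illustration.

# Core values and their relative weights -- chosen arbitrarily here.
VALUES = {"honesty": 0.5, "harm_avoidance": 0.3, "fairness": 0.2}

def score(outcomes):
    """Weighted sum of how well an action's predicted outcomes
    satisfy each core value (each rated 0.0 to 1.0)."""
    return sum(VALUES[v] * outcomes.get(v, 0.0) for v in VALUES)

# Two candidate actions with invented outcome estimates.
tell_truth = {"honesty": 1.0, "harm_avoidance": 0.4, "fairness": 0.8}
white_lie  = {"honesty": 0.1, "harm_avoidance": 0.9, "fairness": 0.6}

candidates = {"tell_truth": tell_truth, "white_lie": white_lie}
best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # with these particular weights: tell_truth
```

Note that nothing in the mechanism is ethical; swap the weights (say, `harm_avoidance: 0.7`) and the machine cheerfully picks the white lie instead. The values and their weights are the whole question.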
See Emergence for my take on it.