It does help to understand where you stand on these matters, thanks. I will try to answer point by point.
First, I believe that humans are just very advanced biological machines. That is, with sufficient technology and proficiency, we will one day be able to build machines that are equal to (or better than) a human in every possible aspect. In other words, I don't believe we have anything intrinsically unique or impossible to replicate, such as a "soul", "energy", "love", or "consciousness". That's my point of view, and you have every right to oppose it.
If you accept this mindset, lying is just a matter of implementation. There is nothing inherently human about it that a machine could never replicate. Likewise, if you believe humans can be ethical, then machines can be ethical too, including handling nuances such as changes in language or tone.
However, it is important to realize that "lying" has an important temporal component: consequence. Consider an AI weather forecaster that predicts rain for tomorrow. A day passes and there is no rain. Did the machine lie, or did it simply mispredict? The usual answer is the latter. For it to be the former, the machine would need to care about (1) whom it lied to, (2) why it lied, and (3) to what end. If the machine is unaware of these, there is no point in calling it a lie. With today's technology, most prediction algorithms don't even have memory, so how could they ever lie if they never keep track of what they said or to whom they said it? In other words, how can a machine be flagged as a liar if it has no idea of the consequences of its answers?
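To make the "no memory" point concrete, here is a minimal Python sketch (hypothetical names, a toy rule rather than a real model or library) of how a typical predictor is structured: a pure function of its inputs that keeps no record of whom it answered or what it claimed.

```python
# Minimal sketch of a stateless predictor (hypothetical example, not a real API).
# The function maps today's observations to a forecast and returns. Nothing about
# the caller, its previous forecasts, or what actually happened the next day is
# ever stored, so there is nothing the model could "remember having claimed".

from dataclasses import dataclass

@dataclass
class Observations:
    humidity: float      # relative humidity, 0..1
    pressure_hpa: float  # sea-level pressure in hPa

def predict_rain_probability(obs: Observations) -> float:
    """Return P(rain tomorrow). A toy rule standing in for a trained model."""
    score = 0.6 * obs.humidity + 0.4 * max(0.0, (1013.0 - obs.pressure_hpa) / 20.0)
    return min(1.0, score)

# Each call is independent: no log of who asked, no memory of past answers,
# and no feedback about whether it actually rained.
print(predict_rain_probability(Observations(humidity=0.9, pressure_hpa=1002.0)))
```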
The closest thing we have today to models that could lie are conversational models (such as GPT or BERT) that employ a context vector. This context carries "past information" the model can use to shape its answer, which gives it some idea of whom it is talking to. However, the model still has no notion of the consequences of its answers, nor any form of persistent memory shared across conversations. Therefore, I don't believe these models, in their present state, can be considered to be lying.
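The sketch below illustrates what that context buys and what it doesn't. It is a hypothetical interface, not any specific vendor's API: the caller re-sends the whole conversation on every turn, so the model can stay consistent within one exchange, but nothing carries over once a new conversation starts.

```python
# Sketch of how a conversational model "remembers": the caller passes the full
# conversation (the context) on every turn. Hypothetical interface only.

from typing import List, Tuple

def generate_reply(context: List[Tuple[str, str]]) -> str:
    """Stand-in for a language model: the reply is conditioned only on the context given."""
    last_user_message = context[-1][1]
    return f"(reply conditioned on {len(context)} prior messages, last: {last_user_message!r})"

# Within one conversation the context grows, so the model can refer back to
# what was said earlier in this exchange.
context = [("user", "Will it rain tomorrow?")]
reply = generate_reply(context)
context += [("assistant", reply), ("user", "Are you sure?")]
print(generate_reply(context))

# A new conversation starts with a fresh context: nothing said above carries
# over, and the model has no record that it ever made the earlier claim.
new_context = [("user", "What did you tell me yesterday?")]
print(generate_reply(new_context))
```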
Finally, going back to your first paragraph, I haven't seen any work on constraining models to ethical behavior. What I do see are works that try to address bias in the data. And the reason we don't see ethics handled that way at the moment is, in my opinion, related to the lying discussion. Current models simply have no "state". They don't recall what they said, nor do they care about what happens next. They are so detached from reality that we can't even consider them to be actors and, hence, there is no action to be judged. What we can do, however, is try to improve their answers.