Autonomous cars don't rely on mapping to navigate. They constantly sense everything going on around them and can see way more than a human can. There's nothing about a construction site or changing road markings or potholes that would confuse an autonomous car any more than it would confuse a human driver.
If the road markings and construction workers are behaving in an expected, well-defined manner, sure. But even human drivers regularly get confused when navigating construction sites.
A huge problem with AI and ML, especially as it pertains to autonomous driving, is that these systems can generally only react to a situation if they've seen it before. Novel situations can absolutely trip up AI-piloted vehicles. This is a critical problem, because human drivers rely on a lot of poorly defined, implicit communication to navigate the road safely.
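A toy sketch of why this happens (all feature values, labels, and the confidence formula here are invented for illustration, not taken from any real driving stack): a learned system that matches new scenes against familiar ones gives confident answers near its training data, but near-arbitrary, low-confidence answers on anything genuinely novel.

```python
import math

# Hypothetical 2-D "scene features" the system was trained on:
# (lane_clarity, obstacle_density) -> action
training = [
    ((0.9, 0.1), "proceed"),
    ((0.8, 0.2), "proceed"),
    ((0.3, 0.9), "stop"),
    ((0.2, 0.8), "stop"),
]

def classify(scene):
    """1-nearest-neighbour with a naive confidence: closer = more confident."""
    nearest_features, nearest_action = min(
        training, key=lambda t: math.dist(t[0], scene)
    )
    confidence = 1.0 / (1.0 + math.dist(nearest_features, scene))
    return nearest_action, confidence

# A familiar scene: resembles the training data, so confidence is high.
action, conf = classify((0.85, 0.15))

# A novel scene (think: flailing construction worker, truck reversing):
# far from everything seen before, so confidence collapses and the
# chosen action is essentially arbitrary.
novel_action, novel_conf = classify((5.0, 5.0))
```

Real perception stacks are vastly more sophisticated than a nearest-neighbour lookup, but the failure mode is the same shape: outside the training distribution, the output is unreliable.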
For example, say you come up to a construction site on a single-lane road, and a construction vehicle immediately in front of you needs to back up for whatever reason. The construction vehicle stops in the lane, and a construction worker off to the side of the road starts wildly flailing his arms in a manner that human drivers would interpret as, "we need you to back up". Under current AI and ML implementations, the vehicle would be stuck. It would not understand why the truck stopped, it would not understand what the flailing arms mean, and heck, it likely wouldn't even understand that the construction worker was trying to get the attention of the "driver" in the first place, let alone begin to interpret what the flailing arms mean. The car would be a sitting duck, dead in the road. This is just one example, but there are infinitely many conceivable situations where an autonomous vehicle might get stuck because it doesn't understand implicit social cues.
Further, variances in local culture mean that identical actions taken by human road users in different geographic locations might carry very different social cues. For example, in much of the world it's customary for pedestrians to simply wander onto a street, with the expectation that cars will accommodate their presence. Elsewhere in the world, pedestrians would only do that if they had a death wish.
This is such a big problem that some have even suggested companies operating fleets of autonomous vehicles might need to employ an army of remote workers to pilot the vehicles when they come across situations they cannot navigate on their own. Others have suggested that once autonomous vehicles become a large portion of the vehicles on the road, the laws regulating road use might need to change to completely eliminate unpredictable human behaviour. In the case of a construction site, for example, this would mean that the boundaries of every construction site would need to be very well defined, in a manner that a computer could understand and navigate around. Haphazardly throwing pylons around the street with poorly laid out lane markings would not be adequate. This could even lead to some rather draconian measures,
such as outright banning jaywalking.
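The remote-operator fallback described above reduces to a simple escalation rule. Here's a minimal sketch (function names and the threshold are invented, not from any real system): when the on-board planner can't produce a confident action, the vehicle holds position and queues a request for a human teleoperator.

```python
def plan_step(action, confidence, threshold=0.5):
    """Decide one step: act autonomously, or stop and ask for remote help.

    `action` is the planner's best guess; `confidence` is its self-assessed
    certainty in [0, 1]. The threshold value here is arbitrary.
    """
    if confidence >= threshold:
        return ("execute", action)
    # Confidence too low: treat the scene as un-navigable, stop safely,
    # and raise a request for a remote operator to take over.
    return ("request_remote_operator", "hold_position")

print(plan_step("proceed", 0.92))  # confident: drive autonomously
print(plan_step("proceed", 0.10))  # novel scene: escalate to a human
```

The hard part, of course, is not this rule but producing a confidence estimate that actually drops in novel situations rather than being confidently wrong.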
The current state of Artificial Intelligence should be thought of as conceptually similar to the intelligence of a simple insect, such as a fly. A fly is absolutely an intelligent agent: it has goals, it can systematically and efficiently work towards those goals, and it even has rudimentary problem-solving abilities. However, like our autonomous cars, fly intelligence is only effective when dealing with familiar and well-defined problems. Give a fly a novel problem, such as a glass wall between it and its destination, and it will be hopelessly unable to find a solution.