I'm quite sure autonomous vehicles will be more intelligent than human drivers in such conditions.
I'm sure they will be - and also MUCH stupider than human drivers in other conditions. At the moment they're like the T-Rex in Jurassic Park: if you're not moving, they can't see you. It's how they determine what's just background, and what they need to watch out for - and it works right up to the point they accelerate into a stationary fire truck.
https://www.wired.com/story/tesla-autopilot-why-crash-radar/

All visual systems have bugs, as evidenced by various optical illusions, change blindness, etc. Machine vision is no exception. What fools us won't fool a car, and what fools a car won't fool us. And that's just detecting the surroundings, let alone appropriate decision making.
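To see why a stopped fire truck can vanish from a car's world model: many automotive radar pipelines suppress returns whose relative velocity matches the background, because otherwise every sign, bridge, and parked car is a phantom obstacle. Here's a minimal sketch of that filtering logic - the names, thresholds, and numbers are all invented for illustration, not taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class RadarReturn:
    distance_m: float         # range to the detected object
    closing_speed_mps: float  # how fast the gap is shrinking

def moving_targets(returns, own_speed_mps, tolerance_mps=1.0):
    """Keep only returns that appear to move relative to the road.

    A stationary object closes at exactly our own speed, so it looks
    identical to roadside clutter and gets discarded along with it.
    """
    return [r for r in returns
            if abs(r.closing_speed_mps - own_speed_mps) > tolerance_mps]

own_speed = 31.0  # roughly 70 mph, in m/s
returns = [
    RadarReturn(distance_m=120, closing_speed_mps=31.0),  # stopped fire truck
    RadarReturn(distance_m=60, closing_speed_mps=5.0),    # slower car ahead
]

tracked = moving_targets(returns, own_speed)
# Only the slower car survives the filter; the stopped truck is
# thrown away as background, exactly like the guardrails around it.
```

The trade-off is the point: loosen the tolerance and the car brakes for every overpass; tighten it and it's the T-Rex again.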
https://nerdist.com/trap-a-self-driving-car/

Then there's the security perspective. Do I trust any computer manufacturer to make a system that can't be defeated by appropriate malware? Malware that could be as simple and portable as a banner flung over a bridge, with an image that crashes the vision software at 70mph...
I don't think the future will be autonomous, ever. Rather, it will be a case of mutual assistance. Each different technology - human wetware vs machine circuits - will have its own "blind spots". Use both, and they complement each other's deficiencies. So you get ABS, lane monitoring, intelligent routing, traffic updates, etc - but you also have a human in the loop to see the things the car can't, and cope with the stuff that's outside its programming.