You have to remember, '0' is one of many things:
It is part of the std_logic definition from std_logic_1164 (strictly speaking, the enumeration is std_ulogic, and std_logic is its resolved subtype):
type std_ulogic is ('U', 'X', '0', '1', 'Z', 'W', 'L', 'H', '-');
it is also part of the bit and character types from std.standard:
type bit is ('0', '1');
type character is (a load of chars, including '0');
So, '0' on its own is already one of three possible things.
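To see this in action, the same literal resolves to a different type purely from context (the constant names here are just for illustration):

constant c_sl   : std_logic := '0';  -- the '0' from std_logic_1164
constant c_bit  : bit       := '0';  -- the '0' from std.standard's bit
constant c_char : character := '0';  -- the '0' from std.standard's character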
Then you have these things:
from ieee.std_logic_1164:
type std_logic_vector is array(natural range <>) of std_logic;
from ieee.numeric_std:
type unsigned is array(natural range <> ) of std_logic;
type signed is array(natural range <> ) of std_logic;
Then there is string, which is an array of character, and bit_vector, which is an array of bit.
So as soon as you write ('0', '0'), each '0' on its own is already ambiguous, and putting these ambiguous things together forms an aggregate that could be any of several array types. Without you telling the compiler which one, it has no idea which you mean.
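Presumably the failing line looked something like this (s1 and s2 assumed to be std_logic signals):

(s1, s2) <= ('0', '0');  -- error: the aggregate could be std_logic_vector, bit_vector, string, unsigned, signed...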
Now, what you could have done is use a temporary signal for the qualification:
signal s : std_logic_vector(1 downto 0);
..
s <= ('0', '0');
(s1, s2) <= s;
Here there is no ambiguity: s is a std_logic_vector, so the compiler knows the aggregate ('0', '0') being assigned to it must also be a std_logic_vector.
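A more direct route (standard VHDL, though not shown above) is a qualified expression, which names the aggregate's type inline and avoids the temporary signal altogether (again assuming s1 and s2 are std_logic):

(s1, s2) <= std_logic_vector'('0', '0');  -- the type mark before the tick removes the ambiguity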