So there are several problems here.
1. You didn't give det a size.
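A minimal sketch of a sized declaration, assuming 16-bit signed matrix elements (the names and widths are assumptions, not from your code): a 16x16 product is 32 bits, and the subtraction can grow the result by one more bit.

```vhdl
-- assumes 16-bit signed elements; widths are illustrative
signal det : signed(32 downto 0);  -- 32-bit product + 1 bit for the subtraction
```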
2. You declared the array types as 1D arrays (of arrays), but you're trying to access the elements with 2D-array syntax. You should write:
det <= (a(0)(0) * a(1)(1)) - (a(1)(0) * a(0)(1));
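For reference, here is one way the declarations could look so that the a(i)(j) indexing above works — an array of arrays rather than a true 2D array (type names and the 16-bit width are my assumptions):

```vhdl
-- hypothetical declarations: a 2x2 matrix of 16-bit signed values,
-- built as an array of arrays so elements are addressed a(i)(j)
type row_t    is array (0 to 1) of signed(15 downto 0);
type matrix_t is array (0 to 1) of row_t;
signal a : matrix_t;
```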
3. Your second assignment to det is going to cause multiple drivers on det, giving you Xs in simulation. Each signal can have only a single driver. This is not software: all of your concurrent statements execute in parallel.
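One way to satisfy the single-driver rule is to do every update of det in one place, e.g. a single clocked process (a sketch, assuming the array-of-arrays indexing from point 2 and a clock named clk):

```vhdl
-- sketch: det gets exactly one driver; no other statement in the
-- architecture may also assign to det
process(clk)
begin
  if rising_edge(clk) then
    det <= (a(0)(0) * a(1)(1)) - (a(1)(0) * a(0)(1));
  end if;
end process;
```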
4. Using dividers really requires a divider IP core. An in-line divide operator will create a very slow circuit, with a long combinational path that will badly limit your clock frequency.
5. You are going to hit overflow and clipping issues. An N-bit * N-bit multiply gives a 2N-bit result. You're trying to assign a 48-bit number (assuming det is 32 bits and a is 16 bits) to a 16-bit output.
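With numeric_std you can make the widths explicit and use resize so that any truncation is a deliberate choice rather than an accident. A sketch, again assuming 16-bit signed elements and a 33-bit det:

```vhdl
-- keep the full 2N-bit products, then resize before subtracting;
-- the 33-bit target width is an assumption from the widths above
det <= resize(a(0)(0) * a(1)(1), 33) - resize(a(1)(0) * a(0)(1), 33);
```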
6. More of a style point - you should not do arithmetic with std_logic_vector. It is not meant to represent a number, just a collection of bits, and std_logic_unsigned is a non-standard library. You should use unsigned/signed from the numeric_std package.
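The usual pattern is to pull in numeric_std, keep internal signals as signed/unsigned, and convert only at std_logic_vector ports (the port names below are hypothetical):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;  -- standard package: signed/unsigned + arithmetic

-- inside the architecture, do arithmetic on signed/unsigned and convert
-- only at the entity boundary:
--   a_int <= signed(a_slv_port);          -- slv -> signed coming in
--   y_port <= std_logic_vector(y_int);    -- signed -> slv going out
```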