If we were to draw a picture of the joint distribution of these variables \(x\) and \(y\), we would get something that is 1 inside the \([0,1] \times [0,1]\) square and 0 outside that region.
For a given value \(z\) of the sum \(x + y\), the set where \(x + y = z\) and both \(x\) and \(y\) lie in the square is a diagonal line segment. For \(z \le 0\) or \(z \ge 2\) that intersection vanishes, and for \(0 \lt z \lt 2\) its length varies. The probability density of the sum at a particular value \(z\) is proportional to the length of that segment. As you can imagine, the length varies linearly and reaches a maximum at \(z = 1\).
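A quick simulation makes the triangular shape concrete. This is a sketch in R (not code from the original post): the empirical density of the sum of two uniform deviates should rise linearly to a peak at \(z = 1\) and fall off linearly after it.

```r
# Sum two uniform deviates many times and look at the empirical density.
n <- 1e6
z <- runif(n) + runif(n)
d <- density(z)
# The mode should land near z = 1, the peak of the triangle.
d$x[which.max(d$y)]
```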
For the sum of three random variables, the situation is more complex to reason about geometrically because we need to worry about the intersection of a plane and a cube. For more variables, the geometry is not worth the trouble.
If we tackle the problem a bit more rigorously, the easiest approach is to compute the cumulative distribution of the sum. That leads to a convolution integral over the density functions involved. Since the densities are all 1, the integration limits are the key to the value, and those limits have to be broken down into cases. Actually doing these integrals is a pretty rare activity, since the normal limit is approximated so well after just a few iterations.
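For the two-variable case, the convolution can be carried out explicitly. With both densities equal to 1 on \([0,1]\), the integrand is constant and only the limits matter:

\[
f_{X+Y}(z) = \int_{-\infty}^{\infty} f_X(t)\, f_Y(z-t)\, dt
= \int_{\max(0,\,z-1)}^{\min(1,\,z)} dt
= \begin{cases}
z & 0 \le z \le 1 \\
2 - z & 1 \le z \le 2 \\
0 & \text{otherwise,}
\end{cases}
\]

which is exactly the triangular shape the geometric argument predicted.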
Just how quickly that convergence happens can be seen by looking at the empirical distribution of the sum of three uniform deviates. I used something very like this R code to produce the graph below:
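The original code was not preserved here; the following is a sketch that produces that kind of graph (the exact sample size and styling are guesses): a histogram of the sum of three uniforms with the matching normal curve drawn on top.

```r
# Sum three uniform deviates, many times over, and compare the
# histogram to the normal with the same mean and standard deviation.
n <- 100000
z <- runif(n) + runif(n) + runif(n)
hist(z, breaks = 100, probability = TRUE,
     main = "Sum of three uniform deviates", xlab = "z")
# Sum of three U(0,1) values: mean = 3/2, variance = 3 * 1/12 = 1/4.
curve(dnorm(x, mean = 1.5, sd = 0.5), add = TRUE, col = "red")
```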
In this graph, the red curve is the normal distribution with the same mean and standard deviation. As you can see, the peak is a tiny bit too high and the tails are just a skosh too long for the normal to be a good description of the samples of the sum.
This is, however, just the sum of three random values. If you sum more values, the convergence to the normal distribution is very strong and by the time you are adding six uniform random values together, the difference between the distributions is no longer visible in a graph like this and can only be detected numerically using lots of data and clever things like a Kolmogorov-Smirnov test.
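That kind of numerical check might be sketched like this (an illustration, not the original analysis): sum six uniform deviates and run a one-sample Kolmogorov-Smirnov test against the normal with the matching moments. Whether the tiny remaining difference is detectable depends on how much data you throw at it.

```r
# Sum six uniform deviates; mean = 6 * 1/2 = 3, variance = 6 * 1/12 = 1/2.
n <- 100000
z <- rowSums(matrix(runif(6 * n), ncol = 6))
# One-sample K-S test against the normal with the same mean and sd.
res <- ks.test(z, "pnorm", mean = 3, sd = sqrt(1/2))
res
```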
The moral here is that there isn't much of a way to avoid this convergence to the normal distribution, and distorting the data to avoid it is probably pointless.
But if you are like me, being just a little more normal always made it easier to get along.