If I were doing it, I think I would opt for solution number 1. It's simpler, and results in far simpler queries. Furthermore, if you think about the code you'll have to write to maintain the many-to-many join your second solution requires, I think you will agree that it is far more complex. The user interface for such things can get convoluted, too.
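For what it's worth, here is a minimal SQLite sketch of that difference in query complexity. The table and column names are invented, since the two schemas aren't shown here: `people_flat` stands in for solution 1 (name stored in the row) and `people`/`person_names`/`names` for solution 2 (many-to-many join).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Solution 1: the name is stored right in the row.
CREATE TABLE people_flat (id INTEGER PRIMARY KEY, full_name TEXT);

-- Solution 2: names live in their own table behind a many-to-many join.
CREATE TABLE names (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE people (id INTEGER PRIMARY KEY);
CREATE TABLE person_names (person_id INTEGER REFERENCES people(id),
                           name_id   INTEGER REFERENCES names(id));

INSERT INTO people_flat VALUES (1, 'Edward Kelley');
INSERT INTO people VALUES (1);
INSERT INTO names VALUES (1, 'Edward Kelley');
INSERT INTO person_names VALUES (1, 1);
""")

# One-table lookup...
flat = con.execute(
    "SELECT full_name FROM people_flat WHERE id = 1").fetchone()[0]

# ...versus a two-join lookup for the very same answer.
joined = con.execute("""
    SELECT n.name
    FROM people p
    JOIN person_names pn ON pn.person_id = p.id
    JOIN names n         ON n.id = pn.name_id
    WHERE p.id = 1
""").fetchone()[0]

print(flat, joined)  # same answer, twice the plumbing
```

And that's just a single-row lookup; every report, search, and update in solution 2 carries that join along with it.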
I base my opinion mostly on the following: disk space is cheap; user time spent waiting for joins to finish isn't.
Joins are a necessary part of life, but let's not go overboard.
How many times, really, is "Edward" likely to be repeated in your database? Even if it's repeated 100,000 times, you've used up a bit over half a meg of space to store all the copies (6 bytes times 100,000). Big deal. Looking at the entire database, how many names are likely to be repeated overall? What is the actual probability that any given name is repeated?
Let's do a bit of math. Suppose you have 100 million names in your database. Let's be generous and say each name on average requires 30 characters; you need only 3 Gigs to store those names. How many duplicates are in those 100 million? 30%? I doubt it's anywhere near that much, but let's be generous again and say half the names are duplicated. The space required to store the artificial keys would then be 4 bytes times 100 million on one "side" of the many-to-many table and 4 bytes times 50 million on the other, and removing the duplicates would save 1.5 Gigs. So we have 1.5 Gigs to store the unique names plus 600 MB for the many-to-many table, plus some overhead in the table itself which I'll ignore. Fully representing the names takes 3 Gigs, and the many-to-many approach that avoids duplicates takes a bit over 2 Gigs.

Now, things are a good deal more complex than I've represented, as some names would yield higher savings than others. If there are a lot of criminals using the alias "John Smith", the savings would be higher, but only on the 'alias' side of the many-to-many table. You still have the fixed "overhead" of 100 million people on the 'people' side.
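If you want to check the back-of-envelope figures above, it's a few lines of arithmetic (following the same assumptions: 100 million names, 30 bytes each, half duplicated, 4-byte keys, per-row overhead ignored):

```python
NAMES = 100_000_000      # rows in the database
AVG_LEN = 30             # average bytes per stored name
KEY = 4                  # bytes per artificial integer key
GB = 10**9

# Solution 1: every name stored inline, duplicates and all.
flat = NAMES * AVG_LEN

# Many-to-many approach: half the names are assumed duplicated,
# so only 50 million distinct names need storing, plus the keys
# (4 bytes x 100M on one side, 4 bytes x 50M on the other).
unique_names = (NAMES // 2) * AVG_LEN
keys = NAMES * KEY + (NAMES // 2) * KEY
normalized = unique_names + keys

print(f"flat:       {flat / GB:.2f} GB")        # 3.00 GB
print(f"normalized: {normalized / GB:.2f} GB")  # 2.10 GB
```

So the savings come to under a gigabyte, which is the point: it's real, but it's small.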
But all in all we're still talking a few gigabytes one way or the other. I think that's a worthwhile tradeoff for the complexity of the queries and the code to maintain the data in the structure.
I'm not sure how the second solution handles Mr. Kelley "better" than the first.
If you insist on using the many-to-many table, I would recommend staying away from your alternative 2. Constructing somebody's full name would require four self-joins, possibly three of them outer joins, for each name you want to present. Yuck.
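To make those four self-joins concrete, here's a sketch in SQLite. The schema is my guess at what alternative 2 looks like, since it isn't spelled out here: a single shared name_parts table that every name component (first, middle, last, suffix) points into, with only the first part required.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE name_parts (id INTEGER PRIMARY KEY, part TEXT);
CREATE TABLE people (
    id        INTEGER PRIMARY KEY,
    first_id  INTEGER NOT NULL REFERENCES name_parts(id),
    middle_id INTEGER REFERENCES name_parts(id),  -- may be NULL
    last_id   INTEGER REFERENCES name_parts(id),  -- may be NULL
    suffix_id INTEGER REFERENCES name_parts(id)   -- may be NULL
);
INSERT INTO name_parts VALUES (1, 'Edward'), (2, 'M.'), (3, 'Kelley');
INSERT INTO people VALUES (1, 1, 2, 3, NULL);
""")

# Four joins against the same table -- three of them outer joins,
# because the optional parts may be NULL -- just to print one name.
full = con.execute("""
    SELECT f.part
           || COALESCE(' ' || m.part, '')
           || COALESCE(' ' || l.part, '')
           || COALESCE(' ' || s.part, '')
    FROM people p
    JOIN      name_parts f ON f.id = p.first_id
    LEFT JOIN name_parts m ON m.id = p.middle_id
    LEFT JOIN name_parts l ON l.id = p.last_id
    LEFT JOIN name_parts s ON s.id = p.suffix_id
    WHERE p.id = 1
""").fetchone()[0]
print(full)  # Edward M. Kelley
```

Compare that to `SELECT full_name FROM people` and you can see why I say yuck.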
Listen to your Senior DBA. He knows what he's talking about.
Custom Apps, Inc.