Cruts, hearing loss is more complicated than simple response graphs. As far as voice distinction goes, there's a lot of information in the phase difference between the left and right ears, and the primary 'phase encoder' for the human ear is the outer ear structure itself (it's funny looking for a reason).
This is why that hearing 'band' that was referenced uses a microphone array and complex DSP electronics: to attempt to recreate the phase differences the brain's voice-recognition circuitry needs to home in on the components of the human voice. It's not about simple directionality; the structure of the human ear imposes some very complex phase/frequency changes on the audio that enters it, and directionality isn't the only reason. The outer ear actually forms a tuned waveguide in the forward direction for some frequencies and harmonics, which has a lot to do with how we pick up voices. Why do you think the first thing anyone does when they hear someone's voice is look directly at them? Basic time of arrival is fine for direction sensing, but the harmonics in the human voice can be separately processed, which is why in some situations we can hear people over ambient noise even when their voices aren't louder than it.
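To make the "basic time of arrival" part concrete, here's a minimal sketch of cross-correlating two channels to get a delay and a crude direction estimate. The mic spacing and sample rate are made-up numbers for illustration, not anything a real hearing band uses:

```python
import numpy as np

def estimate_itd(left, right, fs=48_000, mic_sep_m=0.18, c=343.0):
    """Cross-correlate two channels to get a delay and a crude azimuth."""
    max_lag = int(np.ceil(mic_sep_m / c * fs))      # largest physically possible lag
    corr = np.correlate(left, right, mode="full")   # plain time-domain cross-correlation
    lags = np.arange(-len(right) + 1, len(left))    # lag (in samples) for each corr entry
    keep = np.abs(lags) <= max_lag                  # ignore lags the mic spacing can't produce
    best_lag = lags[keep][np.argmax(corr[keep])]    # positive = left channel arrived later
    itd = best_lag / fs                             # inter-channel time difference, seconds
    azimuth = np.arcsin(np.clip(itd * c / mic_sep_m, -1.0, 1.0))  # positive = toward the right mic
    return itd, azimuth
```

That's all directionality is: a couple of samples of delay. It says nothing about the phase/frequency shaping the pinna does, which is the part the brain actually leans on for picking voices out of noise.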
Otherwise it would be that simple. The problem with hearing aids is that in order to work 'properly' they would have to record the sound phase-perfect from WITHIN the ear canal. The best ones do. The best ones also cost more than a decent used car. Some advanced 3D systems have used recordings from within a generic human-ear mockup with good results, but the frequency response of the typical microphones and speakers small enough to fit inside the ear canal is lacking.
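For the "generic human ear mockup" approach, the usual trick is to convolve a dry signal with impulse responses measured at the ear canals of a dummy head, so playback carries the pinna's phase/frequency shaping. A bare-bones sketch (the HRIR arrays are assumed to come from such a measurement; none are provided here):

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a dry mono signal with measured left/right ear impulse responses."""
    out_l = np.convolve(mono, hrir_left)   # left-ear filtering (pinna/canal shaping baked in)
    out_r = np.convolve(mono, hrir_right)  # right-ear filtering
    n = max(len(out_l), len(out_r))
    out = np.zeros((n, 2))
    out[: len(out_l), 0] = out_l
    out[: len(out_r), 1] = out_r
    return out                              # stereo array suitable for headphone playback
```

Played back over headphones, that convolution is what gives 3D audio its "it sounds like it's behind me" effect. A hearing aid has the much harder job of doing something equivalent live, inside the canal, with transducers small enough to fit there.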