My old phone, which did make calls
Despite (now) having a phone more powerful than the Apollo 11 guidance computer, I can barely make a call. In fact, the more sophisticated my phones get, the worse they are for talking. Voice in general is neglected tech. While video moves into HD and ever higher levels of quality, voice is still not much better than tin cans. In fact, cell phones often sound worse than old fixed lines. If you try listening to music or anything on a phone you realize how crap the signal is, far, far worse than MP3. Humans seem to tolerate far more degradation in sound than in other senses, but one would think that voice would advance at least a little in quality. It doesn’t.
Despite all the features available now, I still find it harder to talk on the phone. There is background noise-canceling technology, but it never really caught on. Instead, phones get more and more like computers, and less and less useful as phones. Now computers are shrinking into netbooks and smartbooks while phones grow into smartphones and iPhones, meeting in the middle. Yet there is little or no innovation in voice: no higher bitrates, no stereo, no noise canceling. I suppose there’s no demand for it, but it’s a strange technological dead end.
Common mp3s actually sound worse than cassette tape. I think we lost our taste in sound over time for some reason. Maybe we are too busy watching piano-playing cats on YouTube.
Whateva u say, vinyl still rocks!!
As for the call quality, it’s just your/my phone, mate. Nokias have very decent call quality and speakerphone capabilities. :) But I agree on the whole technology-forgetting-the-basics concept. :)
Sam,
I don’t know where you’re getting your facts from, but a compact cassette sounds noticeably paler in comparison with the average mp3, which, at worst, is encoded at 128 kbps. Anything lower is just an inexpert rip.
But that’s beside the point.
Audio quality on phones, as far as I’ve experienced, has been good enough for voice. I was using Dialog till a while back, and they had consistently stable connections and… legible voice. Airtel, on the other hand, was terrible till very recently. Now they too seem to have decent call quality.
As for overall sound quality, people don’t NEED that high a bitrate when merely talking to each other. As long as you can make out what the other person is saying, I really don’t care. The networks don’t want to give more bandwidth to each call either because that would reduce their capacity.
as phones become more and more like computers, that actually allows us to increase call quality
just imagine using Skype on your phone?
instead of GSM or whatever it is we’re on, if the calls were transmitted over 3G or HSDPA, we could probably use mp3 or ogg streams of much better quality, instead of radio signals.
only thing we need for that is for all phones to be 3G or HSDPA enabled, and pretty much a lot like a computer :P
but yeah, it is true, humans are neglecting sound, morons. :/
There are so many things wrong with this comment …
Do you know how much computational power is required to transcode something into mp3 or ogg? I thought not. Quite apart from the fact that there are better compression algorithms for voice alone (voice does not need the full frequency spectrum, which allows far more efficient lossy algorithms to be applied).
lrn2CompSci, kthx
Guys, seriously?
– Bitrate at the numbers you’re talking about is completely irrelevant when it comes to voice. A CD with scratches on it ripped at 320kbps still sounds horrible.
– The limiter is usually noise on the link/signal, nothing to do with bandwidth. High/low-pass filters usually trim voice down to roughly 3400 Hz at the top end (may have gotten the number slightly off, it’s been a few years) and it still sounds tolerable. No one in their right mind will allocate more bandwidth (and less is always possible).
– For reference, most TTS (text-to-speech) implementations generate good voiceprints at an 8 kHz sampling rate (which corresponds to ~4 kHz of line bandwidth – see the Shannon reference later on; there are TONS more where that came from, this is introductory material for undergrads).
– Noise cancellation works at the point of delivery. It has nothing to do with your telecom provider (seems obvious to me, but just in case you think Dialog or Airtel can do anything about it. They really can’t – they cannot predict the line noise profile at the point of emanation)
– Noise cancellation algorithms with a single mic have been around for umm… 50 years, almost. All those algorithms are tuned towards a certain noise profile. They break down (or become computationally expensive – more battery consumed, the device runs hot) when the noise profile deviates from the tested norm.
Reference: http://en.wikipedia.org/wiki/Kalman_filter (first published 1960)
– The Motorola CrystalTalk technique (according to patents) is interesting because they use the only viable method possible today, dual microphones (Bose/Sennheiser uses something similar for their headphones – use multiple sources to cancel background noise). There is still a cost associated with noise cancellation in terms of battery life and processing power.
More references: bitrate is determined very simply by a series of formulae published 60 years ago. http://www.scientificamerican.com/article.cfm?id=claude-e-shannon-founder
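As a rough illustration of what those formulae give: the Shannon–Hartley theorem puts a hard ceiling on the bitrate any channel can carry. The numbers below (3100 Hz of bandwidth, 30 dB SNR) are my own illustrative assumptions for a classic analog phone line, not figures from the comment:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Assumed analog phone line: ~3100 Hz usable bandwidth, ~30 dB SNR
snr = 10 ** (30 / 10)                  # 30 dB -> linear ratio of 1000
cap = shannon_capacity_bps(3100, snr)  # about 31 kbps
```

That ~31 kbps figure is close to the familiar ceiling analog modems ran into, which is why more voice quality can’t simply be squeezed out of the same narrow channel.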
Work on voice quality is far from stagnant – places which rely on voice recognition use modified Kalman and other algorithms all the time (the Wikipedia link has a list of alternative Kalman-class algorithms; it’s a bit out of date). The reason you do not see them on consumer-level phones is the same as everywhere else – there are cost and other tradeoffs associated with incorporating those features into the device that are simply not worth it for the average consumer.
I haven’t bothered with many links on this comment, but if anyone finds an alternative view – please publish a refutation with cites and we’ll talk.
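For anyone curious what the Kalman filter mentioned above actually does, here is a toy one-dimensional sketch of its predict/update recursion. This is nothing like production noise cancellation (which works on audio frames with tuned noise models); it only shows the shape of the algorithm, with made-up noise variances:

```python
# Toy 1-D Kalman filter: repeatedly predict, then correct toward
# each noisy measurement, weighting by the Kalman gain.

def kalman_1d(measurements, q=1e-4, r=0.5):
    """q = process noise variance, r = measurement noise variance."""
    x, p = 0.0, 1.0              # initial estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                # predict: uncertainty grows
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update estimate toward measurement
        p = (1.0 - k) * p        # update uncertainty
        estimates.append(x)
    return estimates

# Noisy readings scattered around a true value of 1.0
noisy = [1.2, 0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 1.05]
smoothed = kalman_1d(noisy)      # estimates settle near 1.0
```

The tuning of `q` and `r` is exactly the “tuned towards a certain noise profile” cost mentioned above: get them wrong and the filter either lags or passes the noise through.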
As usual we have no idea what you are talking about as you make another futile effort to display your perceived technical competence.
The voice band carried by telephone networks is roughly 300–3400 Hz, which means it has to be sampled at 8 kHz to avoid aliasing (the Nyquist rate for a ~3.4 kHz band, plus a guard band). Each sample is digitized with an 8-bit code. This gives a bitrate of 64 kbps for a voice channel. This is PCM, standardized as G.711, and it is the default codec most network operators use.
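The G.711 arithmetic above, spelled out:

```python
# Standard narrowband telephony parameters (G.711 / PCM):
sample_rate_hz = 8000    # 8 kHz sampling covers the ~3.4 kHz voice band
bits_per_sample = 8      # 8-bit companded (mu-law / A-law) samples

bitrate_bps = sample_rate_hz * bits_per_sample   # 64,000 bps = 64 kbps
```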
So from the early days the PSTN needed 64 kbps to carry a single call, and most current PSTN networks are designed and built around that codec. It doesn’t matter what handset you use; the current network won’t be able to support higher-quality codecs on its core network.
But there is a catch, since Sri Lankan mobile usage is highly dense and network bandwidth gets exhausted (remember, this bandwidth is limited for each carrier). So if they want to carry more traffic they have to reduce the bandwidth used by each call. How do they do that? They switch to a lower-quality voice codec that uses less bandwidth. For example, with the G.729 codec they can carry 8 calls in the same 64 kbps stream. This might be the reason you notice lower call quality on different carriers. The alternative is to put up more cells (towers) to increase capacity, which, as you know, is expensive.
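The rough capacity arithmetic behind that G.729 claim (ignoring framing and signalling overhead, which eats into it in practice):

```python
# Standard codec payload rates:
g711_bps = 64_000    # one uncompressed PCM voice channel
g729_bps = 8_000     # one G.729 compressed voice channel

# How many G.729 calls fit in one G.711 channel's worth of bandwidth
calls_per_pcm_channel = g711_bps // g729_bps   # 8 calls
```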
The good news is there are HD voice codecs out there (G.722), and the sound quality is very good – it feels like you are talking to the other party in person. But I am not sure how long it will take to roll out, since most of the core network hardware has to be upgraded to support it.
The cat is CUTE! Is he yours? What’s he called?