High data rate means smaller spreading factor

EngIntoHW

Member
I read in different articles that in WCDMA, when a transceiver transmits with a high-spreading-factor code (a long code), it transmits at a low data rate, as opposed to a low-spreading-factor code (a shorter code), which is transmitted at a higher data rate.

What is the reason behind it?

Thank you.

*In the article below, they refer to the "spreading factor" as the "processing gain".
[Attachment: Untitled.png]
The higher the spreading factor, the higher the coding gain. It gives better sensitivity at the expense of lower data rates. A low spreading factor requires more power to accomplish a satisfactory bit error rate.

This is all under the usual restriction of a given bandwidth.

CDMA builds the recovered data bit by correlating chips of higher-rate data. The high-rate data in this case is the unique pseudo-random spreading sequence that is exclusive-OR'd with the original true data bit.

In CDMA, much of the 'noise' in the signal-to-noise ratio is uncorrelated noise from other users on the RF channel. The traditional way of specifying sensitivity in µV for a given S/N ratio doesn't have much meaning on a shared CDMA RF channel. Other users raise the noise floor of the communications channel.

A low spreading factor (high real data rate) quickly falls apart as more users access the channel. With only 4 to 16 spreading chips to correlate, it becomes hard to distinguish a given user's original data from the others'.
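
To make that XOR-and-correlate picture concrete, here is a minimal Python sketch of spreading and majority-vote despreading. It's illustrative only: the 16-chip code, the bit pattern, and the 5 flipped chips are made-up values, and a real CDMA receiver correlates soft samples rather than hard bits.

```python
import random

def spread(bits, pn):
    """XOR each data bit with every chip of the PN sequence."""
    return [b ^ c for b in bits for c in pn]

def despread(chips, pn):
    """Recover each bit by correlating its chips against the PN sequence."""
    sf = len(pn)
    bits = []
    for i in range(0, len(chips), sf):
        # Majority vote: count chips that still match the PN sequence.
        matches = sum(chips[i + j] == pn[j] for j in range(sf))
        bits.append(0 if matches > sf // 2 else 1)
    return bits

random.seed(1)
sf = 16                                         # spreading factor: chips per bit
pn = [random.randint(0, 1) for _ in range(sf)]  # pseudo-random spreading code
data = [1, 0, 1, 1, 0]

chips = spread(data, pn)

# Flip a few chips to mimic channel noise / other-user interference.
noisy = list(chips)
for i in random.sample(range(len(noisy)), 5):
    noisy[i] ^= 1

print(despread(noisy, pn) == data)              # True: chip errors voted out
```

With sf = 16, each bit has 16 chips backing it, so a few corrupted chips cannot swing the vote; shrink sf to 4 and the same number of chip errors starts flipping recovered bits, which is the "falls apart" effect described above.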
 
Hi, :)
Thank you very much.

The higher the spreading factor, the higher the coding gain.

I agree: the longer and more unique the code, the more likely it is to be orthogonal to the other codes in the environment.

It gives better sensitivity at the expense of lower data rates.

That's exactly my question: why does this trade-off exist?

A low spreading factor requires more power to accomplish a satisfactory bit error rate.

Got it: in order to increase the signal-to-noise ratio, right?
 
Hi, :)
That's exactly my question: why does this trade-off exist?

Not sure I understand the question you are asking. It all depends on energy per bit (not per chip), energy being power integrated over a time period. CDMA allows a given amount of bit energy to be spread over any amount of spectrum. Even though the energy per Hz goes down for a wider spectrum, the energy per re-correlated bit remains nearly the same. Raise the data rate and you must raise the power to maintain the same energy per bit.
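
A quick numerical sketch of that energy-per-bit argument (the power and bit rates are made-up numbers): with Eb = P / R, doubling the data rate at fixed power halves the energy per bit, so the power must double to keep Eb constant.

```python
P = 0.2                          # transmit power in watts (illustrative)
for R in (15e3, 30e3, 60e3):     # data rates in bits/s (illustrative)
    Eb = P / R                   # energy per bit = power over the bit period
    print(f"R = {R/1e3:4.0f} kbit/s -> Eb = {Eb*1e6:.2f} uJ/bit")
```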
 
Hi RCinFLA,
Thank you very much for your very kind help.

I understood your first post pretty well.

I'm struggling to understand your second post (after reading it many times and comparing it with everything I've read so far on the web).

1. Just to make sure I got the difference between data rate (bits) and chip rate (chips):
Data rate = the rate at which the input data changes (input data such as voice).
Chip rate = the rate at which the spreading code changes.

CDMA allows a given amount of bit energy to be spread over any amount of spectrum.

Raise the data rate and you must raise the power to maintain the same energy per bit.

2. Please let me know if I got you on this.
Raising the data rate => more bits sent over a given period => if the power hasn't changed, there's now less energy per bit.
Is that right?

3. Given that raising the data rate means raising the power, how do you derive from this that a low spreading factor (coding gain) is used for a high data rate?
 
1) yes, key to understanding CDMA

2) yes

3) You need to include interference. This is key to cellular CDMA. CDMA bit error rate is based on (Signal + Noise + Interference) / (Noise + Interference). CDMA cellular is an interference-limited system.

This is one reason CDMA is liked by cellular operators. GSM is like an old telephone switchboard: once the last phone jack is filled with a plug, the system hits a capacity brick wall. There are no more lines available to handle callers.

In CDMA there is a soft degradation in call quality as more phone calls are added to an RF channel. If there is an overload surge of users at a certain time of day the CDMA cellular system can trade off quality of call for more system capacity. Generally, people accept some background burps of interference over being denied a connection with a busy signal.
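
A toy calculation of that soft degradation (my own sketch, not from the thread): assume a single cell with perfect power control, so every user arrives at the same received power, and treat the other users' signals as noise. The spreading factor, power, and noise values below are invented for illustration.

```python
import math

SF = 128        # spreading factor == processing gain (illustrative)
P = 1.0         # received power per user, normalised
noise = 0.1     # background noise power, normalised

# Post-despread SINR for one user among N equal-power users:
# the correlator boosts the wanted signal by SF relative to the
# (N - 1) interfering users plus background noise.
for N in (1, 10, 20, 40, 80):
    sinr = SF * P / ((N - 1) * P + noise)
    print(f"{N:3d} users -> SINR = {10 * math.log10(sinr):5.1f} dB")
```

Each added user shaves a little off the SINR instead of consuming a discrete slot, which is why quality degrades gradually rather than hitting a hard capacity limit.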
 
3) You need to include interference. This is key to cellular CDMA. CDMA bit error rate is based on (Signal + Noise + Interference) / (Noise + Interference). CDMA cellular is an interference-limited system.
Yes, the noise comprises:
1. Background noise
2. External noise
3. Interference from other cells
4. Interference from other users


This is one reason CDMA is liked by cellular operators. GSM is like an old telephone switchboard: once the last phone jack is filled with a plug, the system hits a capacity brick wall. There are no more lines available to handle callers.
Yeah, just as I've read: GSM is TDMA-based - each channel communicates in its own time slot - and the number of time slots is limited.

In CDMA there is a soft degradation in call quality as more phone calls are added to an RF channel. If there is an overload surge of users at a certain time of day the CDMA cellular system can trade off quality of call for more system capacity. Generally, people accept some background burps of interference over being denied a connection with a busy signal.
I understand what you're saying:
More users => More interference => Lower quality
However, how is that related to using a small coding gain when the data rate is high?
 
EngInto is gonna give me a migraine... this is the THIRD time Eng has asked this question, twice I've tried to explain it, both times they said they understood, and here it is again.
 
Hi Sceadwian,

I'm sorry if I upset you in any way.

I've been reading a lot about cellular communication, which has helped me a lot in understanding what RCinFLA talked about.
As I told you, I just can't put into one sentence why a high data rate leads to a low spreading factor (coding gain, as RCinFLA called it).
 
A high data rate means the spectrum is already saturated; if you try to use a high spreading factor, the signal will be stretched beyond its possible spectrum allowance.
 
A high data rate means the spectrum is already saturated; if you try to use a high spreading factor, the signal will be stretched beyond its possible spectrum allowance.

Hi Sceadwian.

Please tell me if that was what you meant.

SF = Chip rate / Data rate

The chip rate is constant at 3.84 Mcps, right?

So:

SF = 3.84 Mcps / Data rate

Therefore, a high data rate means a low SF. Is that how you reached this conclusion?
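
For what it's worth, that relation is easy to tabulate. A short sketch (the 3.84 Mcps chip rate is the standard WCDMA value; the SF list is the usual powers of two):

```python
CHIP_RATE = 3.84e6    # WCDMA chip rate in chips per second

# SF = chip rate / symbol rate: since the chip rate is fixed,
# halving the spreading factor doubles the rate left for the user.
for sf in (256, 128, 64, 32, 16, 8, 4):
    rate = CHIP_RATE / sf
    print(f"SF = {sf:3d} -> symbol rate = {rate/1e3:6.1f} ksym/s")
```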
 
Yes, that's the gist of it.
 