The TCP/IP stack controls how the data is split up and its size; the application can, however, influence this.
Just as an example, a poorly designed RS232-to-USB converter (you can liken that to Ethernet, since both work on roughly the same packet-transfer principles) might transmit only one character from the UART at a time, incurring the full frame overhead for every character sent, which would be a huge chunk of the available bandwidth. A good design with proper buffering could transfer dozens, if not hundreds, of times more information by buffering a full TCP/IP packet's worth of data before transmitting (with a timeout delay, of course). There are software methods to determine the TCP/IP stack's segment size, and these can be worked into the software design if that level of optimization is required; however, it almost never is.
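The buffer-with-timeout idea above can be sketched in a few lines. This is an illustrative toy, not any real library: `BufferedSender`, its `transmit` callback, and the parameter values are all hypothetical; 1460 is just a typical Ethernet TCP payload size.

```python
import time

class BufferedSender:
    """Accumulate small writes and send them as large payloads: flush
    when the buffer reaches the target segment size, or when a timeout
    expires so data is never delayed indefinitely. (Hypothetical sketch
    of the buffering strategy described above, not a real API.)"""

    def __init__(self, transmit, segment_size=1460, timeout=0.05):
        self.transmit = transmit          # callback that sends one payload
        self.segment_size = segment_size  # e.g. typical Ethernet TCP payload
        self.timeout = timeout
        self.buf = bytearray()
        self.first_byte_at = None

    def write(self, data: bytes):
        if not self.buf:
            self.first_byte_at = time.monotonic()
        self.buf.extend(data)
        # Flush full segments immediately; keep any remainder buffered.
        while len(self.buf) >= self.segment_size:
            self.transmit(bytes(self.buf[:self.segment_size]))
            del self.buf[:self.segment_size]
            self.first_byte_at = time.monotonic() if self.buf else None

    def poll(self):
        # Call periodically: flush a partial buffer once the timeout expires.
        if self.buf and time.monotonic() - self.first_byte_at >= self.timeout:
            self.transmit(bytes(self.buf))
            self.buf.clear()
            self.first_byte_at = None
```

For instance, writing one byte at a time into a sender with `segment_size=8` produces one full 8-byte payload immediately, and the leftover bytes go out in a single payload on the next `poll()` after the timeout, instead of twelve separate one-byte transmissions.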
It depends entirely on the specific application and how it and the TCP/IP stack work together; neither one is the deciding factor alone.
There are some cases where a network may internally split up incoming packets to match frame sizes, but that's outside the scope of this sim.
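As a concrete instance of the "software methods" mentioned earlier: on Linux, the maximum segment size (MSS) the stack is using for a connection can be read with the `TCP_MAXSEG` socket option. A minimal sketch over a loopback connection (the option is platform-specific; the function name here is my own):

```python
import socket

def connected_mss():
    """Open a TCP connection over loopback and ask the stack what
    maximum segment size it is using (Linux TCP_MAXSEG option)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))        # any free port on loopback
    srv.listen(1)
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())
    conn, _ = srv.accept()
    mss = cli.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
    for s in (conn, cli, srv):
        s.close()
    return mss

if __name__ == "__main__":
    print("negotiated MSS:", connected_mss())
```

Note the loopback value will be much larger than the ~1460 bytes you would see over real Ethernet, since the loopback interface has a far bigger MTU; the point is only that the figure is queryable if you ever need to size application buffers to it.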