U3AOS1 Topic 4: Calorimetry

Calorimetry is the technique by which the heat energy released or absorbed by a reaction is measured experimentally, using a device called a calorimeter


This is usually done to test the energy content of food and the energy released by fuels such as ethanol, etc.

Does calorimetry sound oddly similar to the everyday word - calorie?


A calorie is just the amount of energy required to increase the temperature of 1 g of water by 1°C (or 1 K) at SLC


Basically, one calorie corresponds numerically to the specific heat capacity of water at SLC (Standard Laboratory Conditions)

Thus 1 calorie ≈ 4.18 J


[Insert calorimeter diagram]


Study design dot point:

  • the use of specific heat capacity of water to approximate the quantity of heat energy released during the combustion of a known mass of fuel and food


As per the law of conservation of energy, the energy released or absorbed must be gained or lost from somewhere

That’s where water comes in: the temperature change of a known mass of water is what we measure

q = mcΔT, where q is heat (J), m is mass of water (g), c is the specific heat capacity of water (4.18 J g⁻¹ °C⁻¹) and ΔT is the change in temperature (°C or K)

ΔT = T(final) − T(initial)
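
Here is a minimal Python sketch of that formula (the mass and temperatures are made-up illustration values):

```python
# Heat absorbed by the water: q = m * c * delta_T
C_WATER = 4.18  # specific heat capacity of water, J g^-1 °C^-1

def heat_absorbed(mass_g, t_initial, t_final):
    """Return the heat absorbed by the water, in joules."""
    delta_t = t_final - t_initial
    return mass_g * C_WATER * delta_t

# Example: 100 g of water heated from 20.0 °C to 25.0 °C
print(heat_absorbed(100, 20.0, 25.0))  # 2090.0 J
```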


There are two types of calorimeters:


Bomb calorimeter

  • Primarily for combustion reactions

  • Exothermic reactions only


Solution calorimeter

  • Used for reactions that occur in solution

  • Endothermic and exothermic reactions


The energy content of food can be determined by relating the energy calculated to the mass of food combusted.


Energy content = q/m = (energy absorbed/released by water) ÷ (mass change of food)
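
A quick sketch of that division (the q value carries over from the earlier example; the food mass is a made-up value):

```python
def energy_content(q_joules, mass_food_g):
    """Energy content of the food, in joules per gram."""
    return q_joules / mass_food_g

# Example: the water absorbed 2090 J while 0.50 g of food burned
print(energy_content(2090, 0.50))  # 4180.0 J/g
```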


However, calorimeters are not 100% efficient: some heat is always gained from or lost to the external environment, and not every calorimeter is the same - different models and conditions behave differently.


So how do we account for that?

We calibrate the calorimeter, which accounts for the inaccuracies caused by heat loss to (or gain from) the environment


Let's say we input a known 200 J of energy, and the temperature of the water measured by the thermometer increases by 5°C

How do we build a relationship off that?

Well, we have to find how many joules correspond to each degree of temperature change


200 J ÷ 5°C = 40 J/°C


Therefore for every 40J inputted, the temperature of the water increases by 1°C

This relationship is called the calibration factor (CF); it is specific to each individual calorimeter and is used to reduce inaccuracies

CF = q ÷ ΔT(during calibration)
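
A minimal sketch of the calibration step, using the 200 J and 5°C figures from above:

```python
def calibration_factor(q_joules, delta_t_calibration):
    """CF = known energy input / temperature change during calibration (J/°C)."""
    return q_joules / delta_t_calibration

print(calibration_factor(200, 5))  # 40.0 J/°C
```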


Now how do we input a known amount of energy?


We primarily use an electric heating rod, with the energy input controlled by the voltage, current and time

E = VIt, where V is voltage (V), I is current (A) and t is time (s)

Note: q means heat whilst E means energy; as heat is a type of energy, the two can be substituted for each other

Treat q=E


Or we use a known amount of a high-purity compound such as benzoic acid

q = ΔH × n, where ΔH is the molar enthalpy of combustion of the high-purity compound (J mol⁻¹) and n is the number of moles of the compound


Hence substituting each formula into the CF formula we get:

CF = VIt ÷ ΔT(during calibration) and CF = (ΔH × n) ÷ ΔT(during calibration)
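
Here is a minimal sketch of both calibration routes. The voltage, current, time and mole figures are made-up illustration values; the magnitude of ΔH for benzoic acid combustion is approximately 3227 kJ/mol.

```python
def cf_electrical(voltage, current, time_s, delta_t):
    """Electrical calibration: CF = VIt / delta_T (J/°C)."""
    return voltage * current * time_s / delta_t

def cf_chemical(delta_h_j_per_mol, moles, delta_t):
    """Chemical calibration: CF = (delta_H * n) / delta_T (J/°C)."""
    return delta_h_j_per_mol * moles / delta_t

# Electrical: 6.0 V and 2.0 A for 150 s raise the water temperature by 4.5 °C
print(cf_electrical(6.0, 2.0, 150, 4.5))  # 400.0 J/°C

# Chemical: 5.0e-4 mol of benzoic acid (|delta_H| ~ 3227 kJ/mol), delta_T = 4.0 °C
print(cf_chemical(3227e3, 5.0e-4, 4.0))   # ~403.4 J/°C
```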


Even after we input a known amount of energy, this still doesn’t account for the inaccuracies caused by the environment



Study design dot point:

  • the principles of solution calorimetry, including determination of calibration factor and consideration of the effects of heat loss; analysis of temperature-time graphs obtained from solution calorimetry


We combat this by graphing temperature against time over the course of the experiment and extrapolating the results. The straight-line cooling section after the reaction is extrapolated back to the time the reaction started; the ΔT read off at that point estimates the temperature change that would have occurred with no heat loss


[insert temperature - time graph perfect + imperfect]
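
Here is a minimal sketch of that extrapolation, fitting a straight line to the cooling section of the graph and reading it back at the time of ignition (all data points are made up):

```python
import numpy as np

# Readings from the cooling section of the graph; the fuel was ignited at t = 60 s
times = np.array([90, 120, 150, 180, 210])        # s
temps = np.array([24.8, 24.6, 24.4, 24.2, 24.0])  # °C

# Fit a straight line to the cooling section, then extrapolate back to ignition
slope, intercept = np.polyfit(times, temps, 1)
t_max_corrected = slope * 60 + intercept  # temperature the water "would have" reached

t_initial = 20.0  # water temperature before ignition
print(round(t_max_corrected - t_initial, 2))  # corrected delta_T = 5.0 °C
```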



So what do we do after we find the calibration factor?


Now, after calibrating, we can test the fuel source. If burning the fuel results in a certain temperature change of the water, we can relate that to the energy released


Eg. If calibration showed that 100 J of input raises the water temperature by 1°C (CF = 100 J/°C), and burning 2 g of muffin raises the temperature by 10°C, then the energy released is 100 J/°C × 10°C = 1000 J, i.e. 500 J per gram of muffin.


Here is a formula to formalise the above:

E(released from fuel) = CF × ΔT(after calibration)
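
A minimal sketch of that step, using the muffin figures from the example above:

```python
def energy_released(cf_j_per_deg, delta_t_fuel):
    """E released by the fuel = CF * delta_T measured after calibration (J)."""
    return cf_j_per_deg * delta_t_fuel

print(energy_released(100, 10))  # 1000.0 J from burning 2 g of muffin
```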


Note: This is where many students often get confused

The ΔT value from calibration is different from the ΔT value measured after burning the fuel


I like to think of it as two separate steps


  1. Calibration (using ΔT during calibration)

  2. Testing fuels (using ΔT after calibration)


This is all to ultimately test the energy content of the food

ΔH(food) = E(released from fuel) ÷ m(food)
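
Putting the two steps together in one minimal sketch (muffin figures again):

```python
def energy_content_of_food(cf_j_per_deg, delta_t_fuel, mass_food_g):
    """delta_H(food) = (CF * delta_T after calibration) / mass of food (J/g)."""
    return cf_j_per_deg * delta_t_fuel / mass_food_g

print(energy_content_of_food(100, 10, 2))  # 500.0 J/g
```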


As calorimetry is very practically based and centres on an experiment, VCAA loves to test this topic with experiment-style questions


For example they like to ask:

What are some ways to increase the accuracy of a calorimeter?


  • Use a tight lid

  • Improve the insulation around the calorimeter

  • Use a digital thermometer

  • Use higher-purity benzoic acid for calibration


J.D’s normal tip: Just brainstorm as many ways as possible to limit heat loss


You must also know the causes of overestimation and underestimation of the heat content of a fuel


Overestimation:

  • Less water than required

  • Non-homogeneous fuel mixture


J.D’s special tip: to brainstorm overestimation causes more easily, think about what changes would result in a greater temperature change of the water


Reasoning behind this is:


ΔH(food) = E(released from fuel) ÷ m(food)

ΔH(food) = (CF × ΔT(after calibration)) ÷ m(food)

Since CF and m(food) are fixed, ΔH(food) ∝ ΔT(after calibration) - anything that inflates the measured ΔT inflates the calculated energy content
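
A made-up numeric illustration of that proportionality, showing why using less water than assumed leads to an overestimate (for simplicity, the CF here is treated as coming only from the water):

```python
C_WATER = 4.18  # J g^-1 °C^-1

cf = 100 * C_WATER   # CF obtained with the intended 100 g of water: 418 J/°C
q_true = 2090.0      # actual heat released by the food (J)

# With only 80 g of water present, the same heat produces a larger delta_T
delta_t_measured = q_true / (80 * C_WATER)  # 6.25 °C instead of 5.0 °C

print(cf * delta_t_measured)  # 2612.5 J - an overestimate of the true 2090 J
```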


Underestimation:

  • Loose lid

  • Analogue thermometer

  • Poor insulation material


Each of these causes heat loss, giving a lower measured temperature change of the water after combustion of the fuel and hence a lower calculated energy content.

