The procedure is how you would go about getting the raw data.
Calibration is converting the raw data into useful, standardised data that can be applied to a range of situations (not just this one experiment).
E.g. I want to find how the magnetic flux density varies with distance from a pole of a magnet, using a Hall probe.
The procedure would be measuring several distances (e.g. 0 cm, 5 cm, 10 cm, etc.) and taking a reading of the current indicated by the Hall probe at each distance (e.g. 50 mA, 30 mA, 10 mA, etc.). These current readings would be your raw values.
Calibration would be applying the correct physics - that the probe's current reading is a function of the magnetic flux density it sits in.
Thus, to convert from mA to T, you need a reading from the Hall probe in a flux density that is already known. E.g. place it next to the pole of a 0.5 T electromagnet and note the reading in mA (say 150 mA). This gives you a conversion factor, and applying it you can convert all the raw mA values into T values and plot a graph of the data. This not only allows a relationship to be established; the derived conversion factor can also be built into the Hall probe's interface so that it automatically displays flux density, rather than current, for any magnet.
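As a minimal sketch of that conversion step (the numbers are taken from the example above, and it assumes the probe's reading is simply proportional to flux density):

```python
# Hypothetical calibration values from the example above
KNOWN_FLUX_DENSITY_T = 0.5      # electromagnet of known flux density (T)
CALIBRATION_READING_MA = 150.0  # Hall probe reading next to that magnet (mA)

# Conversion factor: tesla per milliamp
T_PER_MA = KNOWN_FLUX_DENSITY_T / CALIBRATION_READING_MA

def to_tesla(reading_ma):
    """Convert a raw Hall-probe current reading (mA) to flux density (T)."""
    return reading_ma * T_PER_MA

# Raw data from the procedure: readings at several distances
distances_cm = [0, 5, 10]
raw_readings_ma = [50, 30, 10]

for d, i in zip(distances_cm, raw_readings_ma):
    print(f"{d} cm: {i} mA -> {to_tesla(i):.3f} T")
```

The converted (distance, flux density) pairs are what you would then plot to look for the relationship.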