I am a bit confused. The apparent magnitude of a star is given as m = -2.5*log10(F), where F is the apparent brightness of the star. In some textbooks, F is given as
F = (sum of E_xy) / (texp * A),
where E_xy is the number of counts in a given region around a star, texp is the exposure time and A is the telescope area.
Does that mean that AstroImageJ, for a given user-defined aperture around a given star, sums the counts to obtain a total E_xy and then divides that number by (texp * A) (both quantities extracted from the FITS header) for each image? Assuming this is what happens, what bothers me is that the result of this calculation is a number with units of counts per unit time per unit area. Taking the logarithm of a dimensional quantity is not permissible. What am I missing here? What is the correct chain of reasoning? Quite confusing.
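To make my question concrete, here is a minimal sketch (in Python, with made-up numbers) of the calculation as I understand it from the textbook formula; the variable names and values are my own illustration, not anything taken from AstroImageJ or a real FITS header:

```python
import numpy as np

# Hypothetical values -- purely illustrative, not from a real image or header.
counts_in_aperture = 1.25e6   # sum of E_xy over the user-defined aperture (counts)
texp = 30.0                   # exposure time in seconds (e.g. EXPTIME keyword)
area = 0.0707                 # telescope collecting area in m^2 (e.g. a ~30 cm aperture)

# Apparent brightness as the textbook defines it:
# counts per second per square metre -- a dimensional quantity.
F = counts_in_aperture / (texp * area)

# Instrumental magnitude. Taking log10 of a dimensional number is exactly
# what bothers me: the value of m would change if I expressed the area
# in cm^2 instead of m^2.
m = -2.5 * np.log10(F)

print(F, m)
```

If I change the units of `area` (or `texp`), the numerical value of F changes and so does m, which is what makes me think I am misunderstanding the definition.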