Forum Replies Created
Dr Paul Leyland (Participant)
You could try contacting the copyright holder, presumably The Times, and requesting permission to bring the article to a wider audience.
Dr Paul Leyland (Participant)
Yup.
All the professionals are going to be bugging us amateurs for telescope time, because such a bright event will saturate the detectors on all their equipment.
Dr Paul Leyland (Participant)
I see Robin and I posted pretty much simultaneously.
The neutrinos from SN1987A came in a clump a few seconds long. The detectors weren’t very sensitive and the SN was at quite a distance, so it’s likely that they saw only the very peak of the neutrino curve.
However, core collapse is a very rapid process, on the timescale of a minute or so (hence my prediction), and there is no obvious intense source of neutrino emission afterwards. Once the neutrinos get outside the core, everything else lying in our direction is essentially transparent, so, unlike the initial burst of photons, they will not be scattered.
Dr Paul Leyland (Participant)
It depends entirely on how you look at it.
Neutrino telescopes will notice a great increase in brightness on the scale of seconds to a minute or few.
Optical telescopes will take a day or few, if the many thousands of other SNe which have been observed are anything to go by.
Betelgeuse already shows a disk if your telescope is good enough. It will show an ever bigger disk on timescales between days and millennia. Compare SN1054, whose remnant is now big enough to have been seen by Messier.
Dr Paul Leyland (Participant)
Good to see someone is checking my work to guard against errors. I should do the same for that of other workers.
My earlier post gave a time of 1957-May-19 21:35. Agreement is satisfactory.
Dr Paul Leyland (Participant)
Thank you. I’ve read that paper in the past, and imaged M67, but it’s good to read it again. Section 4 (p77) is particularly relevant, especially the comment about the difference between visual and CCD estimates of a magnitude 17.4 object when the image goes down to 20 or so. Another apposite comment is on page 79: “This magnitude-bridging technique is common in the professional world, as most of the standard stars are too bright for large telescopes.”
Please note that for present purposes I am emphatically NOT trying to detect the faintest possible object on the image. I am trying to measure the faintest objects whose magnitude errors remain below a specific limit, say 0.1 magnitude. In this case the SNR is way above the 5-sigma limit mentioned in the paper.
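To put numbers on that: for shot-noise-limited photometry, the standard error in magnitudes is approximately

\sigma_m \approx \frac{2.5}{\ln 10} \cdot \frac{1}{\mathrm{SNR}} \approx \frac{1.0857}{\mathrm{SNR}}

so an accuracy of 0.1 magnitude needs an SNR of about 11 and 0.01 magnitude about 109, whereas a bare 5-sigma detection corresponds to roughly 0.2 magnitude.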
As I pointed out earlier, if I used a sequence which ends at 16.9 to measure a variable at, say, 19.5 +/- 0.1, that estimate would be accepted without question. Why should a measurement to the same accuracy of an equally bright nearby star be rejected purely because it is not a (known) variable?
Behind all this is my firm belief that one should not throw away data. It should be preserved for later scientists to re-analyse if they wish. For my part I store every image which is not too badly corrupted by focus errors, guidance errors, passing clouds, etc. In only 18 months I already have more than thirty thousand images, together with their metadata in a SQL database. All can be retrieved and re-examined for whatever reason — pre-discovery observations, perhaps, or searching for previously unknown variables.
Don’t misunderstand me: I will continue to play by the rules as they stand, but it seems to me that the present rule is, to use Arne Henden’s phrase, “extremely conservative”.
Dr Paul Leyland (Participant)
My submissions to the DB include measurements of the full sequence. Accordingly, I don’t see why confusion should arise. If the sequence changes, perhaps through the addition of fainter members, all significant information is present to reduce to the new sequence.
For the example given, a snippet of an entry would look like
VarAbsMag VarAbsErr CmpStar RefMag RefErr CMMag CmpErr
19.6203 0.0123 169 16.862 0.022 16.765 0.0095
where everything other than “169 16.862 0.022” is fictitious, invented as an example. The CmpStar through CmpErr fields for the rest of the sequence have been omitted here for brevity; they would be present in the true submission.
Dr Paul Leyland (Participant)
Plates 6 & 7 are both of Bennett, taken in mid-April 1970. Can’t be more accurate without better astrometry.
Incidentally, http://www.icq.eps.harvard.edu/bortle.html is an invaluable resource. I use Norton’s 2000 to convert Bortle’s constellation names into approximate RA/Dec for comparing with plate solutions given by Lars.
Dr Paul Leyland (Participant)
Print 8 is also Arend-Roland. The time of exposure is already given on the rear of the print.
Dr Paul Leyland (Participant)
This is the JPL ephemeris of Comet C/1956 R1 (Arend-Roland) for the night of 1957-05-19/20:
1957-May-19 21:00 A 07 03 54.81 +63 27 41.3
1957-May-19 21:10 A 07 03 57.35 +63 27 39.2
1957-May-19 21:20 A 07 03 59.89 +63 27 37.1
1957-May-19 21:30 07 04 02.43 +63 27 35.0
1957-May-19 21:40 07 04 04.97 +63 27 32.9
1957-May-19 21:50 07 04 07.50 +63 27 30.8
1957-May-19 22:00 07 04 10.05 +63 27 28.7
1957-May-19 22:10 07 04 12.59 +63 27 26.5
1957-May-19 22:20 07 04 15.13 +63 27 24.4
1957-May-19 22:30 07 04 17.67 +63 27 22.3
1957-May-19 22:40 07 04 20.21 +63 27 20.1
1957-May-19 22:50 07 04 22.76 +63 27 18.0
1957-May-19 23:00 07 04 25.30 +63 27 15.9

The crude position I gave earlier suggests that mid-exposure was close to 21:35 UT.
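For anyone who wants to reproduce the arithmetic, here is a minimal Python sketch of the interpolation; the measured RA below is a placeholder, to be replaced by the plate-solved value.

# Invert the ephemeris: the RA increases almost linearly (about 2.54 s
# of RA per 10 min), so a measured nucleus RA pins down the time.
from datetime import datetime

def ra_sec(h, m, s):        # express RA in seconds of time
    return (h * 60 + m) * 60 + s

t0, ra0 = datetime(1957, 5, 19, 21, 30), ra_sec(7, 4, 2.43)   # bracketing rows
t1, ra1 = datetime(1957, 5, 19, 21, 40), ra_sec(7, 4, 4.97)
ra_meas = ra_sec(7, 4, 3.70)                                  # placeholder RA
print(t0 + (ra_meas - ra0) / (ra1 - ra0) * (t1 - t0))         # -> 21:35:00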
Lars: do you have precise positions for the other plates? If so, please post them and I’ll happily do the detective work. If not, it will take me a little time.
Dr Paul Leyland (Participant)
Lars didn’t post the precise positions of the comet nucleus. I could do so but don’t know whether he intends to, and I don’t really want to duplicate his efforts. Given even a rough guess at the epoch of each plate, the precise position should nail down the date and time of mid-exposure to within a few minutes.
Dr Paul Leyland (Participant)
I’m no good at identifying comets but I do know how to drive a plate solver. A local install of astrometry.net with the Gaia-DR2 database turns up the following for “Plate 1”.
RA,Dec = (111.067,63.945), pixel scale 20.1561 arcsec/pix.
Field center: (RA,Dec) = (111.072006, 63.929675) deg.
Field center: (RA H:M:S, Dec D:M:S) = (07:24:17.281, +63:55:46.828).
Field size: 3.21749 x 4.45504 degrees
Field rotation angle: up is 114.772 degrees E of N
Field parity: neg
Eyeball astrometry of the image gives a J2000 position for the comet’s nucleus of 07:04:43.7, +63:26:31. Please feel free to precess that to B1950 if it helps identify the comet.
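If it helps, here is a minimal sketch of that precession using astropy, with the position quoted above:

# Transform the J2000 (FK5) position of the nucleus to B1950 (FK4).
from astropy.coordinates import SkyCoord, FK4
import astropy.units as u

j2000 = SkyCoord('07:04:43.7 +63:26:31', unit=(u.hourangle, u.deg), frame='fk5')
b1950 = j2000.transform_to(FK4(equinox='B1950'))
print(b1950.to_string('hmsdms'))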
Looks like it might be clear again tonight. If so, solving the remaining prints will give me something to do while waiting for the photons to come in.
Dr Paul Leyland (Participant)
I understand what you are saying but it goes against the grain to throw away information. Here’s my reasoning.
Suppose that on the night in question SV Ari was at V=19.45 and that I had easily enough SNR to measure it to an accuracy of 0.01 magnitude, based on an extrapolation of the sequence magnitudes down below the 169 comparison star. I would report a positive result.
Suppose that a field star was also measured on exactly the same image at, say, V=19.61, also to an accuracy of 0.01 magnitudes. I feel I would be justified in recording it as such, if only in my own records. Note that whether or not that second star is a variable is irrelevant, because it is being measured at a specific point in time.
Now, a week or so later, SV Ari has faded to a true magnitude of, say, V=22.0, which is way below the detection limit. However, that same field star is still measurable on an image taken at the later date. For the sake of example, let’s say it is now measured at V=19.62 with an accuracy of 0.01, again using only the official sequence. It is quite irrelevant in this particular instance whether that star has truly faded slightly or whether the difference between the two measurements arises for SNR reasons. It is quite clear that SV Ari at this date is significantly fainter than V=19.6. It seems wrong to me to throw away the additional information about the limit on the brightness of the variable.
Please note, in the latter case, I would NOT be using the V=19.6 star as part of the sequence to determine instrumental magnitudes and their errors. All of that is still being done exclusively with the standard sequence through a lengthy extrapolation.
Yes, I’m quite prepared to work with Jeremy and/or the AAVSO to extend the sequence in this case and others. However, prospective additional sequence members will need to be checked to ensure that they do not vary significantly on timescales ranging between hours and years before they can be used with confidence. (This issue has already bitten me: I discovered that one of the AAVSO comparisons for V3721 Oph is an EA with minima 0.025 and 0.010 magnitudes deep.) My suggestion, on the other hand, requires no assumption of constancy, only that the limiting magnitude can be measured at the time of observation.
Dr Paul Leyland (Participant)
Geosynchronous satellites have been plentiful for decades now. The good thing is that their orbits are known precisely and their ephemerides are widely available, so you can take your images at times when they are absent. Further, they are confined to a small strip of the sky, so there are many other areas to image without their intrusion.
The real, err… persons born to unmarried parents are the likes of the Starlink constellation. IMAO, anyway.
Dr Paul Leyland (Participant)
A NEO will also move during and between exposures, but very likely nowhere near as quickly, and it will certainly be nowhere near as bright. Both characteristics are easy to determine in software, as sketched below.
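As a sketch of the rate test, with invented positions and an invented 600 s gap between exposures:

# Apparent motion of a candidate between two exposures, compared with the
# ~15 arcsec/s at which a geostationary satellite trails against the stars.
from astropy.coordinates import SkyCoord
import astropy.units as u

p1 = SkyCoord('07:04:02.4 +63:27:35', unit=(u.hourangle, u.deg))  # exposure 1
p2 = SkyCoord('07:04:05.0 +63:27:33', unit=(u.hourangle, u.deg))  # exposure 2
rate = p1.separation(p2).arcsec / 600.0     # arcsec per second over 600 s
print(f'{rate:.3f} arcsec/s')               # a NEO is usually far below 15 arcsec/s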
Dr Paul Leyland (Participant)
Yup.
I’m on their mailing list. They appear to have mitigating procedures under development.
AFAICT, they would prefer not to have to deal with it but they are prepared to do so.
One approach exploits the fact that each field is imaged in several filters. A satellite will have moved between successive images, so a transient still stands an excellent chance of being detected. A problem, as I understand it, is that the calibration images (i.e. those without transients) will require much more care in their preparation.
Dr Paul Leyland (Participant)
VS observers such as myself are relatively fortunate. We already have to deal with satellite trails and cosmic ray hits but, as long as they don’t lie within the apertures around the VS and comparison stars, they don’t matter.
Those interested in precision astrometry — of asteroids and comets, for example — are in a similar position.
It’s the takers of pretty pictures who will have the real difficulties.
Perhaps more people will become interested in the measurement of images rather than in imaging per se. I’d argue that the former is more scientifically valuable but I’m a scientist and so can be expected to be biased.
Dr Paul Leyland (Participant)
I can provide my VS pipeline to anyone who may find it interesting and/or useful. The stages run as follows, where CCD2APT, clean_solve, APT2VSS, merge_phottsv and VSS2TA are my scripts, most of them in Perl, one in bash:
- Download a CCD photometry HTML file from AAVSO’s VSP page, then run it through CCD2APT, which produces two files: one a source list for APT, the other a comparisons list for APT2VSS (described below). This needs to be done only once per variable.
- (Optional but recommended) Check the images for plausibility — correct field, not trailed, no inconvenient satellite tracks, etc. I use the ds9 viewer.
- Put a WCS on the FITS files with solve-field from a local installation of astrometry.net. For efficiency I keep a pre-parameterized call to solve-field in a trivial shell script. That for SV Ari reads:
solve-field -O -p -L 0.25 -H 1.5 -u app -8 pos -3 03:25:03.34 -4 +19:49:52.9 -5 2.0 -z 4 *.fit
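(For reference: -O overwrites existing output, -p suppresses the plots, -L and -H bound the pixel scale with -u app setting the units to arcsec/pixel, -8 pos fixes the parity, -3/-4/-5 give the RA, Dec and search radius of a position hint, and -z 4 downsamples by a factor of four before solving.)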
- Remove all the extraneous crud that solve-field leaves behind with clean_solve.
- Co-add if desired, using SWarp with COMBINE_TYPE = AVERAGE.
- Use APT for aperture photometry, either interactively or scripted. APT is very easy to script.
- APT2VSS converts APT’s CSV-format output into a TSV file ready for submission directly to the BAA-VSS database.
- (Optional, every so often) Accumulate all the individual TSV files for a given variable with merge_phottsv into a multi-record file suitable for BAA-VSS submission.
- (Optional, once a month) Run all the BAA-VSS files created in a given month through VSS2TA to produce a file in the format which The Astronomer prefers.
Scripting the core of the pipeline is itself trivial.
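By way of illustration, the middle stages might be driven like this (a sketch only: solve-field and SWarp are the real programs, while the invocations of APT and of my own scripts are simplified, with invented arguments):

#!/usr/bin/env python3
# Sketch driver for stages 3-7 of the pipeline above.
import glob
import subprocess

def run(*cmd):
    print('+', ' '.join(cmd))
    subprocess.run(cmd, check=True)

images = sorted(glob.glob('*.fit'))

# Stage 3: plate-solve (parameters as in the SV Ari example).
run('solve-field', '-O', '-p', '-L', '0.25', '-H', '1.5', '-u', 'app',
    '-8', 'pos', '-3', '03:25:03.34', '-4', '+19:49:52.9',
    '-5', '2.0', '-z', '4', *images)
run('clean_solve')                          # stage 4: remove the crud

# Stage 5 (optional): co-add the solved images (solve-field writes *.new).
run('swarp', *sorted(glob.glob('*.new')), '-COMBINE_TYPE', 'AVERAGE')

# Stages 6-7: aperture photometry, then reformat for the BAA-VSS database.
run('APT', 'sv_ari_sources.txt')            # scripted APT call, simplified
run('APT2VSS', 'sv_ari_comparisons.txt')    # arguments invented for the sketch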
Dr Paul Leyland (Participant)
I tried all the combinations I could think of. The macros (i.e. the ones supplied by the VSS) did not then work with LibreOffice.
The bulk of the images I processed were high-cadence photometry of V3721 Oph. I’m the L in CHL, the others being Phil Charles and Kevin Hills.
Dr Paul Leyland (Participant)
I’m now looking at the Virtual Moon Atlas, which displays the appearance at specific times and dates, so the libration and illumination are depicted realistically.
The “Dark feature” is a combination of Peary (closest to the limb), Byrd and Gioja, all of which are well above 80N. The southernmost, Gioja, is at 83.346N; Peary is at 88.625N.
Your “dark-floored crater” is Scoresby, and the atlas shows its floor in deep shadow at 2019-12-08T22:35Z (my guess at your time of observation). The “craters” between Anaxagoras and Scoresby are Goldschmidt D and a peak for which I have yet to find a name. The pair to the north of Scoresby might be Main and Main A, or perhaps Main L. The “very bright floor” and the neighbouring features on your sketch appear to be an extremely eroded crater which does not seem to have been named.
The attached image is a screenshot of the area as it appears in the Virtual Moon Atlas for the time and date in question. The red square marks Goldschmidt D.