Forum Replies Created
Duncan Hale-Sutton (Participant)
I was very sorry to miss this meeting but I had a prior engagement that would have been difficult to break. I hope to join you all another time!
Duncan Hale-Sutton (Participant)
I was pleased to see that Z UMa has been highlighted in a recent Facebook post. I have added the light curve of this variable star from that post below. As described, over the last year the maxima have become distinctly double-peaked, and it has been suggested that this is due to the interaction of the two main pulsation periods of this star. However, how does this square with John Greaves's analysis that after 1995 there is little evidence for two pulsation periods in the data (see my earlier posts on this topic)?
I wondered whether the timescale for the evolution of the double peak fits the two periods of 194.0 and 204.8 days that John measured before 1995. If you imagine that the two pulsations start out in phase, then after one cycle of each the peak of the 194.0-day pulsation will be ahead of the 204.8-day pulsation by 204.8 − 194.0 = 10.8 days. After n such cycles, the shorter pulsation would be ahead by n × 10.8 days. Looking at the light curve, for the last double peak (September 2022 to November 2022) the two humps were separated by about 3 months, or roughly 60 days. This would seem to imply that the two periods were in phase n = 60/10.8 ≈ 6 cycles earlier, and that certainly seems feasible looking at the light curve.
So how often would we expect double peaks to occur if the two periods of 194.0 and 204.8 days persist? The simple linear addition of two sine curves of equal amplitude but different periods A and B gives another sine curve of period 2AB/(A+B), which in this case is 199.3 days, with its amplitude modulated by a cosine envelope of period 2AB/(A−B), which is 7,358 days or about 20 years. However, the cosine envelope passes through zero twice per period, so double peaks should recur every half period, i.e. roughly every 10 years. This is only a very rough guide, because I am sure the behaviour is not linear, but it seems consistent with what I said in my earlier posts about how often double peaks occur. Interestingly, the amplitude of the light-curve variations does decrease as a double hump approaches.
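The arithmetic above can be sketched as a quick Python check (this is only the linear-superposition estimate from the post, not a pulsation model; variable names are mine):

```python
import math

# Two pulsation periods measured by John Greaves before 1995 (days)
A, B = 194.0, 204.8

# Phase drift per cycle: the shorter period gains (B - A) days each cycle
drift = B - A                        # 10.8 days
cycles_to_60_days = 60 / drift       # ~5.6, i.e. roughly 6 cycles

# Linear superposition of two equal-amplitude sines:
# carrier period 2AB/(A+B), envelope period 2AB/(B-A)
carrier = 2 * A * B / (A + B)        # ~199.3 days
envelope = 2 * A * B / (B - A)       # ~7358 days (~20 years)

# The envelope crosses zero twice per period, so double peaks
# recur every half envelope period
beat_interval_years = envelope / 2 / 365.25   # ~10 years

print(round(carrier, 1), round(envelope), round(beat_interval_years, 1))
```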
What it says to me is that we really need more accurate measurements of the light curve to see what is going on. Have the two pulsation periods begun to die out, as John suggests, or are they merely hidden in the noise of the data? Perhaps we need regular digital-detector monitoring over the next 20 or 30 years to decide. Anyone fancy this?
Attachments:
Duncan Hale-Sutton (Participant)
It seems that the Universe may contain a lot of quantum entanglement. This must be a form of information, and information bits have an energy cost. As energy equates to mass, I wonder if this contributes to the missing-mass problem.
I am sorry I never got around to replying to your comment; however, I have just seen this article https://www.quantamagazine.org/physicists-use-quantum-mechanics-to-pull-energy-out-of-nothing-20230222/#comments and it makes interesting reading and may be relevant to what you say.
Duncan
Duncan Hale-Sutton (Participant)
On a slightly more serious note, and writing as a non-physicist, there seems to me something rather suspect about saying ‘Our model works – but only if we invent entities x and y.’ Might it not be an alternative to say, ‘The anomalies are so great that it looks as if our model doesn’t work. Perhaps it’s time to re-examine fundamental axioms and build a new model.’ Could we perhaps be heading for an exciting ‘paradigm-shift’?
Alan – agreed. The problem is that the standard model of cosmology works very well, matching observations from very early times until now. It means that people are reluctant to accept a paradigm shift until there is overwhelming evidence to the contrary. It is like having a black box which predicts results for an experiment: if it keeps predicting the right results, you might not be tempted to get your screwdriver out and fiddle with its innards in case you mess it up. Interestingly, there are a few things on the horizon that may cause concern. One is the so-called “Hubble Tension” (see, for example, https://www.scientificamerican.com/article/hubble-tension-headache-clashing-measurements-make-the-universes-expansion-a-lingering-mystery/) and the other would be if the James Webb Space Telescope starts seeing lots of well-formed galaxies at very high redshifts (see, for example, https://www.scientificamerican.com/article/jwsts-first-glimpses-of-early-galaxies-could-break-cosmology/).
Duncan Hale-Sutton (Participant)
Hi Adam. I have been messing about with some of the equations from the paper I mentioned above for the last few days. I thought, perhaps, that if the mass loss from the universe could be written into omega(matter) in equation (1) as something like omega(matter) = −kt + c for some time interval between, say, t1 and t2 (constants k > 0 and c to be determined), then we could substitute this into equation (1) with omega(lambda) = 0 (i.e. no dark-energy component, but instead a matter density that decreases steadily with time; that is, the universe is losing mass). I found, though, that this doesn’t work, and so it rather blows my idea out of the water! The issue is that the second time derivative of the expansion factor, d^2 a(t)/dt^2, is < 0, which means that even though the universe continues to expand, it does so at an ever-decreasing rate. Of course, for the model with lambda the rate of expansion goes from decreasing to zero to increasing.
I hope this makes sense.
Duncan Hale-Sutton (Participant)
A fascinating discussion – and I am looking forward to the conclusion as I try to lose weight every year around this time and would like some tips on how to do better in future!
Hi Alan. I would like to help you with your weight-loss plan; however, I think dark matter decay may not help your regime. I don’t think there is really enough of it in each of us to be useful. This article https://www.universetoday.com/15266/dark-matter-is-denser-in-the-solar-system seems to imply that there might be only about 0.0018% of the mass of the Earth in the whole solar system, and I reckon, though I haven’t calculated what that equates to per cubic metre, that this doesn’t amount to much in each of us. Perhaps a more effective way to lose weight would be to cut your nails, as Tony did in an episode of Men Behaving Badly that I recall.
Duncan.
PS Sorry about the edits – having trouble with my bbcodes!
Duncan Hale-Sutton (Participant)
For the so-called ‘benchmark universe’, matter and lambda, the scale factor, a(t), is proportional to (sinh(t))^(2/3).
Hi Adam. Thanks for taking this on and replying to my wacky ideas! To be honest, I am trying to catch up with where you are in your understanding. I didn’t realize that in the matter-dominated era in which we now live, the scale factor a(t) for a lambda-CDM model could be so neatly expressed via a sinh(t) function. I found this paper (pdf) on the internet which I found helpful. I think their equation (5) is what you are referring to. What a beautifully elegant result! I had always assumed that a model with lambda would be difficult to compute. So yes, as you say, this would be the benchmark to compare to. It has two nice asymptotic forms. When H0 t << 1 the form is as in equation (6): the lambda term becomes negligible and a(t) is proportional to t^(2/3), like the Einstein-de Sitter model. When H0 t >> 1 the form is as in equation (8): the lambda term dominates and a(t) is proportional to exp(k t), where k is a constant, like the de Sitter expansion model. I hope I have got this right!
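The two asymptotic forms can be checked numerically (a quick sketch: the dimensionless variable x stands in for the combination of H0 and t, and the normalisation constants are dropped):

```python
import math

def a(x):
    """Scale-factor shape for a flat matter + lambda model: a ∝ sinh(x)^(2/3)."""
    return math.sinh(x) ** (2 / 3)

# Early times (x << 1): sinh x ≈ x, so a behaves like x^(2/3),
# as in the Einstein-de Sitter model
early_ratio = a(1e-3) / (1e-3) ** (2 / 3)

# Late times (x >> 1): sinh x ≈ exp(x)/2, so a behaves like exp(2x/3),
# as in de Sitter expansion
late_ratio = a(20.0) / (math.exp(20.0) / 2) ** (2 / 3)

print(early_ratio, late_ratio)   # both tend to 1 in their respective limits
```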
I am going to split my replies due to issues of losing my posts when editing them.
Duncan Hale-Sutton (Participant)
Thanks for the suggestion, Bill. I apologize that my use of the words “mass loss” is close to the term often used for stars. Stars, as you say, do lose mass all the time through nuclear burning, but also through stellar winds and evolutionary scenarios. Unfortunately, as Paul says, all that material and heat just dissipates into another part of the universe, and radiation and matter still contribute to the Universe’s gravity through Einstein’s general relativity. My idea of mass loss is a bit bonkers, really. All our notions of matter/energy are that it is a conserved quantity; you can’t just magic it away. However, in my defense I would say that we know very little about the properties of dark matter, so who is to say what it does? Also, in the past, theorists have had models that consider the creation of matter, as in the steady-state theory of Bondi, Gold and Hoyle (1948). So, if you can consider creating it, why not consider destroying it? My idea would be a big bang theory with a reverse steady state!
10 January 2023 at 12:09 am, in reply to: Possible visibility of Virgin Orbit launch from the UK on January 9th (#615065)
Duncan Hale-Sutton (Participant)
Unfortunately it looks like the rocket has failed and the payload did not reach orbit.
9 January 2023 at 11:40 pm, in reply to: Possible visibility of Virgin Orbit launch from the UK on January 9th (#615064)
Duncan Hale-Sutton (Participant)
Clear skies here in Norfolk (10 miles from Norwich, on the north side). Watched from the drop of the rocket until 11.30 UT. Nothing seen (but then I thought the chances were low).
Duncan.
Duncan Hale-Sutton (Participant)
Another look at this star this evening. I was comparing it to stars E (= 7.3) and H (= 7.8) on chart 312.02. I definitely think its brightness is between these two stars, and a bit closer to E than to H. My estimate was E(1)V(2)H, that is, magnitude 7.5 (7×50 binoculars, 18:55 UT). Interestingly, the AAVSO plot for this star is showing some quite varied estimates recently, even between CCD observations, so I am not quite sure what is going on here. What are other people seeing?
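For anyone unfamiliar with the fractional method, the E(1)V(2)H arithmetic works out like this (a sketch; the variable names are mine):

```python
# Fractional visual estimate E(1)V(2)H: the variable sits 1 step from E
# and 2 steps from H on a 3-step scale between the two comparison stars.
E, H = 7.3, 7.8                    # comparison-star magnitudes, chart 312.02
steps_from_E, steps_from_H = 1, 2

# Interpolate linearly between the comparison magnitudes
mag = E + (H - E) * steps_from_E / (steps_from_E + steps_from_H)
print(round(mag, 1))               # 7.5
```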
Duncan
Duncan Hale-Sutton (Participant)
Agreed, it was a very enjoyable meeting and social afterwards. Many thanks to the BAA and to those who contributed and organised it.
Duncan.
Duncan Hale-Sutton (Participant)
My visual observation tonight at 18.50 UT put it at about magnitude 7.6.
Duncan.
Duncan Hale-Sutton (Participant)
Thanks for looking into this, Andy. I will let you know via this thread if I see a problem again.
Cheers, Duncan.
Duncan Hale-Sutton (Participant)
Sorry Steve, my forum post went missing again as soon as I made a small edit to it! Here it is (again):
Hi Steve,
the problem with my original missing post was kindly sorted (it had been marked as spam!) and it now appears in the thread above.
I have had a look at the situation where the sun and the moon are different apparent sizes and I don’t think the situation is too bad. You have to take into consideration that the two “lens”-shaped areas either side of the chord PP’ in my diagram are not the same size, but the formulae for their areas have similar forms. I think that the percentage obscuration in this instance is given by (50/pi)(w – sin w + f^2(W – sin W)). As before, w is the angle subtended (in radians) by the chord of length l at the centre of the sun. However, now there is also an angle W subtended by the chord at the centre of the moon. You can see that the w – sin w part is the same as before, but now there is an additional part that comes from the area of the “lens” to the right of the chord, which is governed by the shape of the moon. The factor f is the ratio of the moon’s diameter to the sun’s. As before, w = 2 arcsin(l/d), where l is the length of the chord and d is the diameter of the sun. But we also have W = 2 arcsin((1/f)(l/d)). I hope this makes sense.
I now calculate that if f = 0.99253 (assuming that the moon was smaller than the sun at the time of your measurement) then the measured obscuration becomes 18.08% +/- 0.13. So it does make a small difference. I guess that this formula works while W and w are less than or equal to pi; I haven’t figured out what happens otherwise!
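The formula can be sketched in Python as a quick check (the function name is mine, and I am assuming the central pixel values 846 and 1200 for the chord and solar diameter from later in this thread):

```python
import math

def obscuration(l, d, f):
    """Percentage of the solar disc obscured, given chord length l,
    solar diameter d, and moon/sun apparent-diameter ratio f
    (formula from the post above; valid while w and W <= pi)."""
    w = 2 * math.asin(l / d)          # angle subtended at the sun's centre
    W = 2 * math.asin(l / (f * d))    # angle subtended at the moon's centre
    return (50 / math.pi) * (w - math.sin(w) + f**2 * (W - math.sin(W)))

# Equal apparent diameters (f = 1) reproduces the earlier ~17.98% figure;
# f = 0.99253 gives the revised ~18.08%
print(round(obscuration(846, 1200, 1.0), 2))      # ~17.98
print(round(obscuration(846, 1200, 0.99253), 2))  # ~18.08
```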
Duncan
Duncan Hale-Sutton (Participant)
Hi Steve, there was a problem with the system. I created a post and then made a couple of small edits and then it disappeared! I was hoping it could be resurrected but I think it has gone into the ether.
The short summary of what I said was this. I agree about the errors. Taking your results, the ratio 847/1199 gives an upper limit for the angle w and the ratio 845/1201 gives a lower limit. Taking these as the limits of the error, your measurements of the obscuration and magnitude become 17.98% +/- 0.13 and 0.2908 +/- 0.0014 respectively. My conclusion is that we would have to work a bit harder to get these errors down if we were going to see any deviation from the predicted values (which would be interesting).
Duncan.
Duncan Hale-Sutton (Participant)
Hi again Steve,
I completely agree with what you said about the errors. The bit I meant to calculate was the error on the estimate of the obscuration and magnitude. If we take your +/- 1 pixel example, the largest value of the angle w subtended by the chord is derived from the ratio 847/1199, which corresponds to w = 1.568860462 radians. The smallest angle is derived from the ratio 845/1201, which corresponds to w = 1.56084678 radians. So the upper and lower limits to the obscuration percentage are 18.11% and 17.85% respectively, which roughly means that your observed value is 17.98 +/- 0.13 (compared to the predicted value of 17.96%). For the magnitude, the upper and lower limits are 0.2922 and 0.2894 respectively, which means that your observed value is 0.2908 +/- 0.0014 (compared to the predicted value of 0.2910). So it seems we would have to work a bit harder on accuracy before we could start to determine whether the observed value really differs from the prediction (and this is what would be of interest).
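These limits can be reproduced with a short Python sketch (the magnitude expression 1 − sqrt(1 − (l/d)²) is my reading of the fraction-of-the-solar-diameter definition for equal apparent sizes; it reproduces the values quoted above):

```python
import math

def obscuration_pct(l, d):
    """Equal apparent diameters: obscuration = (100/pi)(w - sin w),
    with w = 2 arcsin(l/d) for chord l and solar diameter d."""
    w = 2 * math.asin(l / d)
    return (100 / math.pi) * (w - math.sin(w))

def magnitude(l, d):
    """Fraction of the solar diameter covered, equal apparent diameters."""
    return 1 - math.sqrt(1 - (l / d) ** 2)

# +/- 1 pixel limits on the measured chord (846 px) and diameter (1200 px)
print(round(obscuration_pct(847, 1199), 2), round(obscuration_pct(845, 1201), 2))
print(round(magnitude(847, 1199), 4), round(magnitude(845, 1201), 4))
```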
Duncan.
Duncan Hale-Sutton (Participant)
Hi Steve.
Wow, that’s great. It is nice to see that you could get a measurement for the magnitude and the obscuration which agrees so nicely with the prediction. Very satisfying. As regards my formula, I am pleased that you could verify it. I too slogged through some trig, but I spotted this simplified form of it at the end. I must work out roughly what the errors would be on these values, given that you could measure things to within a pixel or two. I haven’t actually checked the assumption that the moon and the sun were approximately the same apparent diameter for this particular eclipse. Oh, actually, on that diagram of the eclipse in the handbook it has the apparent semi-diameters of the sun and moon at greatest eclipse (S.D.=…). Taking these to be the numbers we need, it would imply that the moon was slightly smaller than the sun?
Thanks also for explaining about the eclipse diagrams in the handbook. Yes, that link you gave to that NASA page does definitely help! I will keep a note of it.
I also liked your two videos of the eclipse, especially the second one which shows that the path of the eclipse is a straight line. Very clever.
Duncan.
Duncan Hale-Sutton (Participant)
Hi Alex. Thanks very much for replying to my query. All very interesting.
I must admit that I understood eclipse obscuration (or percentage coverage) better than I did magnitude, as you do tend to see predictions for the former more often in articles in the run-up to a partial eclipse. I did look at the BAA Handbook and the observer’s challenge beforehand. One thing I could point out as a relative newcomer to all this is that I didn’t understand what the blue-line predictions were in the Handbook, as there didn’t seem to be a key (that I could see; perhaps someone could point me to where it is)! I see now that the numbers on the blue lines are probably magnitudes rather than maximum obscuration (I have just learnt something new, in that magnitudes are quoted for maximum coverage). On the next page, though, the numbers are given as maximum obscuration (so even the Handbook can’t decide what to display!). However, I do agree with you that magnitudes may be better, because they are linear rather than depending on area. I must say that quite a lot of the other numbers quoted on page 13 of the 2022 Handbook don’t mean much to me.
I think that trying to measure the percentage obscuration really made me understand what these terms meant, and the bit of maths required to calculate the area of intersection of two identical circles was enlightening (no one has yet told me whether what I quoted was right or wrong!). Clearly it is more complicated if the sun and the moon are not the same apparent diameter, but the same rules apply.
I guess what I am trying to say is that even though the mechanics of the earth-moon system is very well known, and calculating magnitudes or obscurations from our observations is not going to improve that knowledge, for those of us on the learning curve it is still very instructive!
Thanks for the link to the article. I will have a read of this at some point.
Duncan.