Only maybe in the dense core of a galaxy because incident radiation falls off with the square of distance.
The nearest star to us after the Sun is ~4 ly away, or ~250k AU. The Sun would have to be ~63 billion times brighter to give the same incident radiation at 250k AU, and that is just a typical distance between stars in our neighborhood. The Sun is also brighter than the average star, especially the older stars that congregate near the galactic center.
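The inverse-square arithmetic behind that figure is quick to check (a sketch using the comment's own round number of ~250k AU):

```python
# Incident flux falls off as 1/r^2, so a star at distance r (in AU)
# must be r^2 times brighter than the Sun to match the Sun's flux
# at Earth (1 AU). Using the round figure of ~250k AU:
distance_au = 250_000
brightness_factor = distance_au ** 2   # 6.25e10, i.e. ~63 billion
print(f"{brightness_factor:.2e}")
```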
Galaxies can easily have 1 trillion stars but they are usually so spread out as to make this impractical. This is also why the Milky Way, Triangulum, LMC, SMC, and Andromeda (nearest galaxies) are so faint to the naked eye.
"in the center of the galaxy, stars are only 0.4–0.04 light-years apart"
The most luminous stars going by Wikipedia are about 5 million times brighter than the Sun. Not sure if those are anywhere near galactic centres though.
Nobody who had to work with 6509's/7609's at an ISP is shedding a tear over this.
Someone (Richard Steenbergen?) once made a joke that we should take the last 6509/7609 and launch it into orbit to celebrate.
It's not that they weren't popular. At one point in the mid 2000's they appeared to make up about 1/3 of major internet routers (if you looked around a carrier hotel). This was due to their extremely low cost compared to actual high end routers. While they had serious limitations and were notoriously sensitive to "IOS roulette", somehow you could just make them work.
The 7600 was an absolutely idiotic product. The 6500 was, for the time, fine as an enterprise Ethernet switch (much more capable, obviously, once the sup 2 with fabric services module and sup 720 with integrated crossbar came along), but using it as an ISP router, especially where you were taking a full routing table? That was just stupid.
For anyone reading this that doesn't have experience with these things, when the parent commenter talks about "just making them work," one failure mode among many in these devices is that packet forwarding is primarily done in hardware, more or less at line rate. But if you enable an IOS feature that isn't supported in hardware, it gets processed in software. In more "ISP-focused" routers, it is common to simply not support features that aren't implemented in hardware. Forwarding performance on these platforms goes from almost 500 million packets per second in hardware (in certain highly specific and very unlikely scenarios) to around 40-50 thousand packets per second -- absolute best case -- in software. Another failure mode specifically applicable to the ISP scenario is the fixed hardware forwarding table size, which for many models was 192k IPv4 prefixes. Could you have a larger forwarding table? Absolutely. In software.
I remember Hale-Bopp in 1997. At its peak it was so bright you could easily see it from inside a brightly lit restaurant, looking out a window 20 feet away.
> In 1976, while Van Flandern was employed by the USNO, he began to promote the belief that major planets sometimes explode.[30] Van Flandern also speculated that the origin of the human species may well have been on the planet Mars, which he believed was once a moon of a now-exploded "Planet V".
> Mistake it may well be, but the fact remains that this sense of the word is in widespread use today, and may be found often enough in well regarded and highly edited, publications.
I would say you are most likely to find this usage of that word in well regarded and highly edited, publications.
I just had an idea that may be my worst technology idea ever.
Assemble some unstable atoms (that decay into carbon) into the desired cubic structure. When they decay you have a diamond.
The problem with this is that if it can decay fast enough (even with outside neutrons) it will be too hot (pun intended), and if it decays slowly enough it will take too long. Depending on the source isotopes and process it could also result in a radioactive diamond! Also, the heat of the process would have to not change the crystal structure.
However, some day when we master quarks and the weak interaction we might be able to do this quickly and safely.
Hmm; so the only thing that can "easily" decay into the stable forms of carbon - C12 and C13, that is - is N13 (β+ to C13 with a half-life of minutes). Nothing decays into C12, since N12 or O12 would have half-lives so short as to make them "doubtful" isotopes.
But nitrogen wouldn't crystallise in a diamond lattice; never mind the crystal absorbing "heat" from the radioactive decay temporarily disturbing atomic positions, there's just no way to arrange nitrogen and carbon atoms into similar locations of a crystal lattice. This sort of "transmutation" isn't even science fiction; it's only a dream.
(follow your dreams but think a few times before trying to make money off them)
Leaving aside the decay part of things, carbon makes a crystal structure of a diamond, other materials don't. So they would refuse to assemble into the right shape.
What we need is sensors that can scan polarization on a per-pixel basis (like 256 orientations per pixel per image). Then it would be much easier to detect and remove consistently polarized components of the image (as specular reflections from glass are).
This would just be a fully electronic/computational version of a mechanical polarizing filter.
You only need 4 parameters to describe the polarization at a single wavelength[1]. Naively this could be 4 parameters per color channel, so 12 channels overall. I think you could potentially need more color channels though to capture the full spectrum. But 12 channels at least looks feasible for a camera.
On second thought for dealing with reflections you might get away not capturing the "V" Stokes parameter, as you might not care about circular polarization.
edit2:
The I,Q and U parameters can be captured fully by a single polarization filter at three different rotations. This could be feasible with existing cameras with a tripod and a static subject. I wonder if this has been done before.
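A sketch of that recovery, assuming an ideal linear polarizer and shots at 0°, 45°, and 90° (the transmitted intensity at filter angle θ is (I + Q·cos 2θ + U·sin 2θ)/2):

```python
import math

def stokes_iqu(i0, i45, i90):
    """Recover the linear Stokes parameters from intensities measured
    through an ideal linear polarizer at 0, 45, and 90 degrees.
    (Sketch; assumes a perfect filter and a static scene.)"""
    I = i0 + i90
    Q = i0 - i90
    U = 2 * i45 - i0 - i90
    return I, Q, U

# Example: fully linearly polarized light at 30 degrees, intensity 1.
# A polarizer at angle t transmits cos^2(30deg - t) (Malus's law).
def malus(pol_deg, filt_deg, intensity=1.0):
    return intensity * math.cos(math.radians(pol_deg - filt_deg)) ** 2

I, Q, U = stokes_iqu(malus(30, 0), malus(30, 45), malus(30, 90))
angle = 0.5 * math.degrees(math.atan2(U, Q))  # recovers the 30-degree angle
```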
You can buy [1] polarization cameras, both mono and with a Bayer filter. They're expensive right now, but I agree it would be really cool to see what could be done with a consumer grade version in a smart phone.
Interesting. From what I can find, the pixel format is 4 polarization directions per pixel, 45 degrees apart. Even though there are 4 channels, this doesn't let you deduce the V Stokes parameter (this camera can't capture circular polarization). Technically one channel is redundant here, but I guess it can be useful for reducing error.
I wonder if an alternative pixel format, with 3 polarization directions 60 degrees apart and a circular polarization channel would be desirable for some applications.
I'm pretty sure he means a single byte-valued parameter. As you mention a single parameter is not enough to fully describe the polarization but maybe it's good enough - I guess you would average across colors, and say circular polarization would lead to a basically random value.
I did indeed mean a single, byte-valued parameter indicating angle (similar to the single angle parameter of a mechanical polarizing filter).
Full polarization and phase info would be great to have also but probably not necessary for reflection suppression. And yes purely circular polarization would be undefined in this scenario but again not common (possible?) with reflections.
Due to quantum physics, there are actually only two degrees of freedom in the ways light can be polarized, referred to as the "Jones vector". In other words, it's impossible even in theory to distinguish between light that has exactly two perpendicular polarizations mixed together and light that is fully unpolarized, with thousands of polarizations distributed all around the circle. That makes it surprisingly possible to build a camera that captures _everything_ there is to know about light at some particular frequency.
Not quite — that’s for polarized light. For general light that may be unpolarized, you need four parameters. You can use the Stokes parameters, or, if you’re feeling very quantum, you can describe the full polarization state of a photon by a 2x2 density matrix. (I have never personally calculated this, but I’m pretty sure you can straightforwardly translate one formulation to the other — the density matrix captures the polarization distribution of a photon sampled, by whatever means, from any source of incoming light.)
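For concreteness, the translation between the two formulations could look like this; the sign conventions for U and V vary by source, so treat the exact signs here as an assumption:

```python
import numpy as np

def stokes_to_density(I, Q, U, V):
    """2x2 polarization density matrix from Stokes parameters.
    (One common convention; sign conventions for U and V vary.)"""
    return 0.5 * np.array([[I + Q, U - 1j * V],
                           [U + 1j * V, I - Q]])

def density_to_stokes(rho):
    """Inverse map: read the four Stokes parameters back off the matrix."""
    I = (rho[0, 0] + rho[1, 1]).real
    Q = (rho[0, 0] - rho[1, 1]).real
    U = 2 * rho[1, 0].real
    V = 2 * rho[1, 0].imag
    return I, Q, U, V

# Unpolarized light: the density matrix is proportional to the identity.
rho_unpol = stokes_to_density(1.0, 0.0, 0.0, 0.0)
```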
> [...] remove consistently polarized components of the image (as specular reflections from glass are).
It was my understanding that reflections in glass can be either polarized or non-polarized, or a mix of both.
If you use a polarizing filter on a camera (e.g. when taking photos of artwork through glass, or shooting over water that you want to see into), you will often find that it does not remove all reflections.
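That matches the physics: reflected light is fully polarized only at Brewster's angle, so a filter can only null a reflection near that geometry. A quick sketch (n = 1.5 for glass is an assumed typical value; it varies by glass type):

```python
import math

def brewster_angle_deg(n1=1.0, n2=1.5):
    """Brewster's angle for light going from medium n1 into n2.
    At this angle of incidence the reflected light is fully
    (s-)polarized; at other angles it is only partially polarized,
    so a polarizing filter can't remove all of it."""
    return math.degrees(math.atan(n2 / n1))

# Air -> glass: roughly 56 degrees from the normal.
print(f"{brewster_angle_deg():.1f}")
```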
Yes, because my mom* is not going to carry that around to take pics of the grandkids.
Just because something exists does not mean it is practical. I can totally see how having a software solution that Apple can include in its fakeypics app, then my mom would be able to take advantage of this.
Apple could request a sensor with the Polarsens mask. It's just not worth it, from a resolution & light-gathering perspective. Big tradeoffs for improvements in specific scenarios is not a path Apple has typically taken for their cameras.
Apple is not going to make a hardware change like your suggestion, but they would be much more likely to use the software concept from TFA. I'm assuming that Googs, Samsung, CCPhardware would be similar. They need to do something compelling with all of the specialized compute they are including in their devices.
Unfortunately, nothing can remove the temperature of the atmosphere (which affects infrared imaging), or the absorption of many wavelength bands.