All Photos Tagged calculation
Car: Nissan 300ZX.
Date of first registration: 2nd March 1987.
Registration region: South East London.
Tax expired on 1st September 1999.
By my calculation, there are no more than 200 of this generation 300ZX remaining in the UK.
Date taken: 4th January 2019.
Album: Abandoned Cars
Although the female was in perfect position to make the catch, the horizontal flight of the starling threw off her calculations. This frame shows how quick and acrobatic peregrines are, as she tries to adapt to the changing dynamics.
The Nikon F90X (N90s in the U.S. market) was the second and final version of what I call the third generation of semi-pro 35mm autofocus SLRs from Nikon. The second generation, the F801/F801s (N8008/N8008s), introduced reliable autofocus, together with built-in autowind and rewind, spot metering, a new single control wheel interface, and other features, to the semi-pro line. (Note: an even earlier Nikon autofocus design, the F501 (N2020), had early first generation autofocus capability and a completely different interface and viewfinder display.) The F90/F90x continued virtually the same interface and body design as the F801/F801s but upgraded the level of technology, especially in its final incarnation, the F90X. The F90X is the epitome of Nikon's single focus point autofocus film SLRs. The F90X's interface was changed and enhanced with the subsequent F100, which introduced dual control wheels. The F100 also adds multiple focus and spot metering points, together with support for modern vibration reduction/image stabilization (VR) lenses and slightly more flexible custom settings. While the F100 is in some ways a better film camera to use today than the F90X, the F90X already included 3D Matrix Metering, which is the biggest exposure metering advance in the industry until the later color matrix metering of the F5 and F6 (and subsequent digital SLRs). The F90X also supports the built-in Silent Wave motors of modern Nikon lenses. In spite of its technical advances over the F90X, the F100 has an even more severe problem than the F90X with decomposition of the rubbery surface of its camera backs. If you don't need support for VR and are looking for a low-cost high-tech AF body in the used market, the F90X could be just what you want.
With the big picture out of the way, let's look at the features and functions of the F90X in more detail. The original F90 appears to have been rushed out in 1992 to quickly upgrade the F801s as a way to compete with Canon in a rapidly developing market. However, the F90X was released less than two years later with a long list of major and minor refinements. Today, you would definitely want the F90X over the original F90. The biggest improvements in the F90X over the F90 were improved autofocus, and the ability to adjust P, S, and M modes in 1/3 stop increments rather than one stop increments. According to Nikon, both the F90 and F90X used the CAM246 AF detection system, so AF improvement from the F90 to F90X was presumably due to better software.
The F90X is an amazing camera. The F801 already felt very advanced, moving from an F3HP, when it was released in 1988. But after upgrading from the F801 to the F90X, one really appreciated the more responsive autofocus, the addition of spot metering (I never moved to an F801s), and most of all 3D Matrix (multi-pattern) metering. The 3D matrix metering of the F90X enabled more accurate exposure metering, especially for flash photography with dedicated Nikon electronic flashes, by incorporating subject focus distance information from AF-D lenses into the exposure calculation. The F90X is optimized for use with AF-D lenses, either Nikkor lenses or lenses from third-party manufacturers such as Sigma and Tamron. The F90X works with non-autofocus Ai lenses, but with such lenses, you can only use center-weighted and spot metering (no matrix metering) and you can only use the Aperture Priority and Manual exposure modes (no Program or Shutter Speed Priority modes). Also, the set aperture of non-AF lenses does not appear in the viewfinder since there is no optical ADR like on most earlier Nikon bodies. The F90X (unlike the previous F801/F801s) auto-focuses with later G-type lenses (with no aperture ring and Silent Wave focusing motor). While you cannot adjust the aperture of G-type lenses directly on the F90X, a very simple solution is to shoot in Program mode, but use Flexible Program by turning the control wheel to step through equivalent aperture/shutter speed combinations while keeping the EV fixed. (This works a bit like exposure lock on electronic Contax bodies.)
At 755g, the size and weight of the F90X are very reasonable, especially by the standard of the F4 or F5. It is an incremental increase in size and weight over the F801/s. In addition, although the user manual only indicates four AA-type alkaline, manganese or NiCd batteries, both Nikon and my personal experience confirm that the F90X also works fine with relatively lightweight AA lithium batteries. Overall, the camera/battery combination, even with alkaline batteries, is perfect for both stability and portability. I have personally only used alkaline and lithium batteries and both types last for a very long time, although of course not as long as the button batteries in older manual focus cameras, such as the F3 or FM2N. The F90X owner manual indicates a battery capacity of 50 rolls of 36-exposure film at 20 degrees Celsius. This is much more than the battery capacity of my F6, especially with a power-hungry VR lens attached to the F6. The F90X has a very solid and comfortable feel, with a metal interior, matte-type rubberized grip surfaces, and heavy-duty matte composite plastic exterior plates. The F90/X body design includes a molded hand grip that is very stable and comfortable, but does not excessively add to the dimensions of the body. The covering material on the F90/X is not as rubbery or tactile as newer Nikon bodies, but still offers a fine grip.
There was a well-known problem with the rubberized material on the exterior of the camera back. A few years ago, the rubberized material on at least some samples of the F90/X started to decompose and become a sticky goo. The same problem happened to the back of my own F90X, which became extremely sticky and completely unusable. Fortunately, my camera tech was able to procure a new replacement back and make the camera like new. The new back, like the old, is indeed plastic except for the pressure plate and other hardware. Still, the construction of the new back is very solid and it fits snugly onto the camera body when closed, without any irritating play. I am not sure about the composition of the exterior surface of the new back. It is a very attractive matte black finish that must be either some type of composite material or a very fine sprayed-on layer. In any event, it appears to be very durable and hopefully long-lasting.
With the F90X generation of Nikon bodies, if you want to adjust certain functions, such as auto exposure bracketing, multiple exposure operation, interval timer, film imprinting, etc., you will need to add an MF-26 Multi-Control back. You can also use the optional Data Link System and AC-2E card to adjust more settings, download stored data, etc.
The viewfinder of the F90X has a relatively low 92% image coverage. Such coverage is more appropriate for the era when people used mounted slides, which cut off the edges of the frame, but is more limiting in today's age when film is scanned directly after processing at the lab. On the other hand, you can crop the scanned images in Photoshop if necessary. You just need to keep in mind at the time of shooting that your image will include a bit more than you can see. The viewfinder display is well-organized, and the brightness of the soft green horizontal LCD display is just right for both bright and dim environments. The viewfinder eyepiece does not include an adjustable diopter. However, Nikon still makes a full range of single diopter lenses in current production. The F90X takes the same diopter lens as the F3HP, F801/S, F90 and F100.
Film advance modes of single frame and continuous High (4.3 fps) and Low (2.0 fps) speed offer more than enough speed for casual shooting.
As mentioned above, the exposure metering system of the F90X is extremely advanced. It has second generation software and three additional central segments in addition to the five metering segments of the FA and F801/s (plus spot), for a total of 8 segments. In addition, the new "3D" technology of the F90x increases the accuracy of the multi-segment metering system even further, especially for flash photography, with concurrent and later AF-D compatible lenses. Center-weighted metering is of course included, and is designed with a 75% weighting, which had become the new Nikon standard for center-weight, more like the 80% center-weight of the F3 than the 60% weight of classic Nikon camera meters. The 3mm spot meter had become standard since the earlier F801s. One of my few complaints about the design of the F90X is that the selected exposure metering system is not displayed in the viewfinder, unlike on many later models. Glasses wearers will prefer the selected exposure metering system to be indicated in the viewfinder so they can switch among the metering systems without putting their glasses on, especially since the control wheel interface makes it difficult to confirm the selected system by feel alone. The LCD display on the top of the camera duplicates much of the same information as the viewfinder display, plus additional information such as metering system and ISO. The exposure meter is very sensitive, covering EV -1 through EV 21 in matrix and center-weighted, and EV 3 to EV 21 for spot metering.
Although the F90X only has a single focus area, autofocusing is quite responsive. Of course, the focusing technique in the day of the single autofocus point was to focus on the appropriate object, lock the focus with the shutter release button or AF lock, recompose, and shoot. The focus indicator also works very well with most manual focus lenses (with a maximum aperture of f/5.6 or faster). Just focus manually until the round digital in-focus indicator is displayed in the viewfinder; there is no need for a central focusing aid on the focusing screen, although you can also manually focus with the matte screen itself. The central focus area can be easily switched between Spot and Wide by pushing a button on the top right of the camera and turning the control wheel; which area you have selected shows up in the viewfinder display so you can switch back and forth with your eye to the viewfinder. The Wide autofocus area is actually quite large, covering more than half of the outer central circle of the viewfinder image. The autofocus system appears quite adept at following moving subjects that stay within this expanded focusing area.
The F90X has all of the required PASM exposure modes and then some. The camera adds Ps "Vari-Program" modes that automatically set the recommended shutter speed and aperture combinations for seven separate photographic situations, such as Portraits, Portraits with Red Eye Reduction, Landscape, Sports, Close Up, etc. However, anyone who properly knows their way around a camera, or wants to learn, has no need for these Vari-Program settings. Unlike the metering system, the exposure mode is extremely easy to adjust with your eye to the viewfinder; just push the Mode button on the top left of the camera and turn the control wheel to select the correct mode. The selected mode is always clearly indicated at the bottom of the viewfinder. Program mode is extremely useful, even for photographers who are expert at manual camera setting. In a pinch, the camera's Program mode can adjust exposure fully automatically. More commonly, however, it is convenient to let the camera select the correct EV and the approximate shutter speed/aperture combination in Program mode. You then simply turn the control wheel in Program Mode after you have metered the scene to select the exact shutter speed/aperture combination that you want in 1/3 stop increments. This technique becomes even more useful when using newer G-type lenses, which have no aperture ring, since you have no secondary control dial to change the aperture directly. Manual mode works very well with AF lenses that have aperture rings; the digital analog readout in the viewfinder indicates exposure deviation in 1/3 stop increments, although, unlike some later designs, it only displays the range of +-1 EV to save display real estate. Although you can't see exactly how far off you are when greater than +- 1 EV, I never found this to be a practical limitation.
The F90X has a highly advanced shutter. Using the control wheel, you can directly set the shutter speed in 1/3 stop increments all the way from 1/8000 sec. to 30 seconds. Standard electronic flash maximum synch speed is a modern 1/250 sec.
Exposure compensation of +- 5 EV is easy to set by pushing a button on top of the camera and turning the control wheel, with the amount of compensation visible in the viewfinder display in 1/3 stop increments. The camera also has an AE-Lock lever on the back for your right thumb. I find that I usually prefer to use Manual Mode rather than the AE-Lock lever.
One small advantage of the F90X as a fully electronic camera is its ability to easily set the self-timer delay between 2 and 30 seconds. Just push the appropriate button on the top left of the camera and turn the control wheel to set. It can be used as an alternative to a remote control cable. On the other hand, one disadvantage of fully electronic cameras, such as the F90X, is that you need to use a special electronic remote cord (MC-20) (rather than a standard mechanical cable) when taking Bulb exposures. Also, Bulb exposures can wear down the battery. Better to use a fully mechanical body if you plan to take lots of long Bulb exposures. (On the other hand, the MC-20 can automatically close the shutter after up to 99 hours, 59 minutes, 59 seconds! The MF-26 back, and apparently the AC-2E card, also provide this function.)
The F90X has easily interchangeable focusing screens. The single available optional screen adds a grid.
The MB-10 Multi-Power Vertical Grip is available if you need more battery longevity and a bigger camera grip with a vertical shutter release.
As already mentioned, with its 3D technology, the F90X is amazing for flash photography. If you know what you are doing, you can get great flash photographs even with non-TTL mechanical bodies. But TTL flash control is much more convenient, and the F90/x enhances the level of TTL matrix flash metering by, for the first time, incorporating focus distance from the lens into the exposure calculation. Just make sure to use AF-D type lenses or better, and one of the compatible Nikon electronic flash units. The most advanced concurrent flash with the F90X was the SB-28, which allows full use of the F90X's flash exposure features. The F90X also supports monitor pre-flash with the SB-25/26/28. (Later flashes, such as current production modern Nikon flashes, also work fully with the F90X. I usually use a current production SB-800 flash, even on the F90X.) Another nice feature of the F90X's flash technology is 3D Multi-Sensor Balanced Fill Flash. This function automatically reduces the output of the flash to supplement ambient light. Of course, if you want more precise control, fill-flash can also be set with appropriate manual flash negative compensation directly on the flash unit. Other flash features available with the F90X/SB-28 combination include Rear-Curtain Synch for motion photography and red-eye reduction.
One generally unnecessary feature that has been omitted from the F90X is mirror lock-up (MLU).
To conclude, the F90X was and continues to be an amazing camera. The F90/X was the first semi-pro Nikon to incorporate 3D metering technology. The camera is fun and effective to use with AF-D or newer lenses. The F90X offered virtually every function that you could think of, at least with its various accessories. The camera feels great in your hands and has a good form factor and weight for both travel and large lenses. Really the only limitation that irritates me about the F90X is that, with manual focus lenses, I cannot use Matrix Metering and there is no viewfinder display of the selected aperture. In practice, this should not impact the quality of your images, but it is certainly less convenient. (Thankfully, this limitation was finally fixed in the F6 and some high-end Nikon DSLRs.) The real limitations of the F90X today are its single focus point, its lack of support for VR lenses, and its lack of a second control wheel for G-type lenses. Thus, in the film world, you would need an F100 or F6 (or the consumer-grade F75/F80) to get maximum benefit out of the newest generation of lenses. The control wheels and rubbery grip are more ergonomic on the F100 and F6, compared with the F90X.
Copyright © 2016 Timothy A. Rogers. All rights reserved.
(DSC_6109fin2)
A quantity introduced in the first place to facilitate the calculation, and to give clear expressions to the results of thermodynamics. Changes of entropy can be calculated only for a reversible process, and may then be defined as the ratio of the amount of heat taken up to the absolute temperature at which the heat is absorbed. Entropy changes for actual irreversible processes are calculated by postulating equivalent theoretical reversible changes. The entropy of a system is a measure of its degree of disorder. The total entropy of any isolated system can never decrease in any change; it must either increase (irreversible process) or remain constant (reversible process). The total entropy of the Universe therefore is increasing, tending towards a maximum, corresponding to complete disorder of the particles in it (assuming that it may be regarded as an isolated system.) (Pamela Zoline)
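In standard thermodynamic notation, the definition quoted above can be restated compactly (my own summary, not part of the original caption):

\[ dS = \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad \Delta S_{\mathrm{isolated}} \ge 0 \]

with equality holding for reversible processes and strict inequality for irreversible ones.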
臺灣首廟天壇 - 一字匾 / 千算萬算不如天之一畫
Tian Tan (Tian Gong Temple) - One-Word Plaque / A thousand schemes and ten thousand calculations are no match for a single stroke of Heaven
Tian Tan, Templo de Tian Gong - Tableta de una palabra / Miles de cálculos no son tan buenos como los del dios
台湾の第一廟の天壇 - 1字の額 / 千は万が計算して日の中の一つの絵に及ばないをの計算します
Tian Tan, Tian Gong Tempel - Ein Wort Tablette / Tausende von Berechnungen sind nicht so gut wie die des Gottes
Tian Tan, temple Tian Gong - Tablette à un mot / Des milliers de calculs ne sont pas aussi bons que ceux du dieu
Tainan Taiwan / Tainan Taiwán / 台灣台南
管樂小集 2017/07/01 台南孔子廟 Confucian temple Tainan performances 1080P
{View large size on fluidr / 觀看大圖}
{My Blog / 管樂小集精彩演出-觸動你的心}
{My Blog / Great Music The splendid performance touches your heart}
{My Blog / 管楽小集すばらしい公演-はあなたの心を心を打ちます}
{Mi blog / La gran música el funcionamiento espléndido toca su corazón}
{Mein Blog / Große Musik die herrliche Leistung berührt Ihr Herz}
{Mon blog / La grande musique l'exécution splendide touche votre coeur}
Melody 曲:JAPAN / Words 詞:Sheesen / Singing : Sheesen
{ 夢旅人 1990 Dream Traveler 1990 }
家住安南鹽溪邊
The family lives in Annan, beside the salt river
隔壁就是聽雨軒
Right next door is the Listening-to-Rain Pavilion
一旦落日照大員
When the setting sun shines upon Dayuan (old Taiwan)
左岸青龍飛九天
On the left bank, the azure dragon flies to the nine heavens
©JoyGerow
Year after year we who have the privilege of seeing the beauty that autumn brings also can hear the symphony.
Some texture by flickr.com/photos/liek/sets/72157604633907740/
Brushes by Obsidian Dawn
If you were born on February 29, 1924, you would be 100 years old today.
BUT you would be only 25 years old in LEAP DAY YEARS!
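For anyone who wants to check that arithmetic, here is a minimal sketch in Python (the function name and the 1924/2024 endpoints are just illustrative choices, not from the original caption):

    import calendar

    def leap_birthdays(birth_year: int, current_year: int) -> int:
        # Count the leap years strictly after the birth year, up to and including the current year.
        return sum(1 for y in range(birth_year + 1, current_year + 1) if calendar.isleap(y))

    print(leap_birthdays(1924, 2024))  # -> 25, matching the "25 years old in leap-day years" above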
This view looks at one end of a trial portion of Babbage's Difference Engine No. 1, which is one of the earliest automatic calculators and a celebrated icon in the pre-history of the computer.
Charles Babbage was a brilliant thinker and mathematician who devised the Difference Engine to automate the production of error-free mathematical tables. In 1823 he secured £1,500 from the British government and employed an engineer to construct the device. However, the project collapsed in 1833 when the engineer left the project.
By that time the government had spent £17,000 on the project, the equivalent then of the cost of two major warships. Recent research has shown that the engineer's work was adequate to create a functioning machine and that the project actually collapsed because of economics, politics, Babbage's temperament and his style of directing the enterprise.
After the attempt at making the first difference engine fell through, Babbage worked to design a more complex machine called the Analytical Engine. The Analytical Engine marks the transition from mechanised arithmetic to fully-fledged general-purpose computation. It is largely on it that Babbage's standing as computer pioneer rests.
Ada Lovelace (Augusta Ada King-Noel, Countess of Lovelace (née Byron; 10 December 1815-27 November 1852)) was the first to recognise that the Analytical Engine had applications beyond pure calculation, and published the first algorithm (to calculate Bernoulli numbers) intended to be carried out by such a machine. As a result, she is often regarded as the first to recognise the full potential of a "computing machine" and the first computer programmer.
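As a side note on what such an algorithm involves, here is a minimal modern sketch in Python of one standard recurrence for Bernoulli numbers. It is purely illustrative; it is not a transcription of Lovelace's Note G program or of anything the Analytical Engine actually ran:

    from fractions import Fraction
    from math import comb

    def bernoulli_numbers(n):
        # B_0 .. B_n via the recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1,
        # which gives B_m = -(1/(m+1)) * sum_{k=0}^{m-1} C(m+1, k) * B_k.
        B = [Fraction(1)]
        for m in range(1, n + 1):
            s = sum(comb(m + 1, k) * B[k] for k in range(m))
            B.append(-s / (m + 1))
        return B

    print(bernoulli_numbers(8))  # B_1 = -1/2 in this convention; B_2 = 1/6, B_4 = -1/30, B_6 = 1/42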
Seen in the Science Museum, London.
If my calculations are correct, when this baby hits 88 miles per hour... you're gonna see some serious shit.
~ Dr. Emmett Brown
Back From the Shadows Again:
BTTF Delorean shot at the Hollywood Car Museum Las Vegas:
These two GJs were captured by the Gemini Observatory cloudcam on July 24, 2017. The reference stars I used for the calculations are circled. The code I used for the calculations was sent to me by Jozsef Bor.
According to scientific calculations and assumptions, Stone Wedding started forming 40 million years ago. At that time the territory of today's Eastern Rhodopes lay at the bottom of a warm, shallow sea, where the relief changed continuously under the influence of active volcanic activity.
The name Stone Wedding is given to several rock formations in which, with a bit of imagination, figures can be made out. A beautiful legend is attached to this natural wonder, telling of an unhappy love affair.
The story tells of a young man who loved a girl from a neighboring village. He was enchanted by her lovely eyes that were blue as the sky.......
----
Copyright © 2011 Slavina Bahchevanova
"Let your beauty manifest itself/ without talking and calculation./ You are silent. It says for you: I am./ And comes in meaning thousandfold,/ comes at long last over everyone." ~Rilke
Yesterday Frank opened three double-yolked eggs in a row. What are the odds?
I don't know. I dropped college statistics because my beret-wearing professor at Northwestern decided he'd raise the learning curve by requiring that we all use slide rules to do all calculations. Yes, it was the 70s, but electronic calculators suitable for statistics had been around for a while. What a jerk.
What are the odds that I'd discover an online article about the odds of getting three twin-yolked eggs in a row? Let's stipulate they're incalculable and move on to the story.
Six Yolks from Three Eggs: What Are the Odds of That?
Making breakfast one recent morning, a colleague cracked an egg and BOOM! A double yolk came out. He cracked another. SHAZAM! And a third, BOOYAH!
Three double yolks in a row, what are the odds of that?
It turns out that number is easy to calculate: In general, one out of every thousand eggs is a double, which would put the odds at 1,000 x 1,000 x 1,000, or one in a billion. Before we all went out and splurged on a pack of scratch tickets, however, we delved a little deeper.
It turns out that doubles turn up more frequently among young hens than older birds, and that flocks of hens tend to be the same age. The chance of a young hen laying a double-yolked egg is roughly 1:30. So, three in a row would put the odds at one in 27,000.
So which is it: one in a billion or one in 27,000? You can see how, if this were a business case, whichever assumption is in error could have enormous implications for your financial modeling, response rate forecasts or whatever you are trying to predict.
On reflection, both of these calculations seem to me to be erroneous. If, in general, one gets a double every 1,000, then it strikes me the odds of the first egg being a double are 1:1,000. Once you get one, assuming the "all eggs in the carton came from a flock of young hens" rule, the chances of a second would be 1:30 and the third would be another 1:30. So, our best guess of the odds of getting three in a row is 1,000 x 30 x 30, which is one in 900,000.
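Out of curiosity, the three estimates are easy to reproduce (a throwaway Python sketch; the 1:1,000 and 1:30 rates are simply the figures quoted in this post, not independently verified):

    eggs_per_double_general = 1000   # quoted: about one egg in 1,000 is a double
    eggs_per_double_young_hen = 30   # quoted: about one in 30 for eggs from young hens

    naive = eggs_per_double_general ** 3                 # 1,000,000,000 (one in a billion)
    young_flock_only = eggs_per_double_young_hen ** 3    # 27,000
    blended = eggs_per_double_general * eggs_per_double_young_hen ** 2  # 900,000

    print(naive, young_flock_only, blended)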
Why is this important? Because probability, statistics and math are hugely important to so many of our business processes. And more often than you might think, folks distort the assumptions that go into statistical modeling for a wide range of reasons, be they political, emotional, or to align conclusions with an expectation of the outcome.
Science and math by their nature seem factual, to be taken at face value. We need to remember that most scientific modeling comes with built-in assumptions, and these must also be factual or the whole conclusion can be suspect.
In this small example, we might have pointed to the 1:1,000,000,000 odds to make the point that something truly extraordinary had just happened.
Our friendly competitor might want to tear us down a notch and challenge our assumptions, coming up with the 1:27,000 number.
Yet, a balanced calculation results in a more nuanced conclusion, with the most accurate prediction of the likelihood of a six-yolk breakfast from the crack of three eggs being somewhere just short of a million to one.
If you want to challenge an argument, find the assumptions that are used to justify it and delve a little deeper. You'll be surprised at what you might find, and how your predictors might become more accurate.
Update: The next morning, he cracked three more eggs and got three more doubles! Now that's extraordinary!
www.hollandlitho.com/3_straight_double-yolk_eggs_what_is_...
Note: This calculation is incorrect. Frank was using jumbo eggs, and experience suggests the likelihood of finding a double yolk is 1:15. If we use the formula the author used above, the odds are 1:225,000.
photo rights reserved by Ben
The Ekolari Market in Stepantsminda, Georgia, is a small, local shop where locals and travelers can buy groceries and traditional Georgian products. In an increasingly digital world, one thing stands out: the woman at the cash register still uses an abacus to calculate amounts. This adds a nostalgic and authentic touch to the shopping experience and reflects how some traditional methods are still used in remote areas. Such markets play an important role in the community, not only as a place of trade but also as a social meeting place. Local products such as fresh bread, cheese, honey and herbs can often be found here, along with basic necessities for daily life.
An abacus, also known as a counting frame, is one of the oldest calculating tools in the world. It consists of rows of beads that can be moved along rods to perform addition, subtraction, and even multiplication and division. While it has been largely replaced by digital calculators and computers in many parts of the world, it is still used in some regions, especially in small markets, traditional shops, and by older generations who grew up using it. In countries like Georgia, Russia, and China, the use of an abacus is still occasionally seen, particularly among small traders who find it faster and more reliable than an electronic cash register. It requires skill and practice, and experienced users can calculate incredibly fast. The fact that a woman in the Ekolari Market in Stepantsminda still uses an abacus offers a charming glimpse into how traditional methods continue to play a role in daily life, even in an increasingly digital world.
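As a rough illustration of what the bead columns encode, here is a toy sketch in Python of column-by-column addition with carries (my own example, not a model of any particular abacus design):

    def abacus_add(a_digits, b_digits):
        # Digits are given least-significant column first, e.g. 47 -> [7, 4].
        result, carry = [], 0
        for i in range(max(len(a_digits), len(b_digits))):
            col = (a_digits[i] if i < len(a_digits) else 0) + \
                  (b_digits[i] if i < len(b_digits) else 0) + carry
            result.append(col % 10)   # beads left standing in this column
            carry = col // 10         # beads pushed over to the next column
        if carry:
            result.append(carry)
        return result

    print(abacus_add([7, 4], [5, 8]))  # 47 + 85 -> [2, 3, 1], i.e. 132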
According to the first calculations, made before 2007, the building was supposed to cost about 77 million €. By 2013 the cost to the taxpayer amounted to 789 million €. The project should have been finished years ago, but it is still under construction.
"Did I ever tell you about the time I stumbled on a pre-colonization Life ship? If you want to hear it then you're buying the next round of drinks. Okay, so me and Gerrick and the old gang are drug running in the Far Rim (you know that fuck-off lawless shitehole in the outer galaxy?) and we're avoiding all the main trade routes in case we get picked up by rival privateers or worse - ITO. Anyway, we staying way off the beaten track and suddenly our scanners reveal this object in our way. The boys reckon it's just a destroyed fuel depot or something but we disengage from Warp Drive to check it out just in case. As we pull up in front of it, two things click in my mind. First of all, this fucking thing was big. Not as big as a destroyer or Dreadnought but certainly larger than your average run-of-the-mill space junk. Secondly, it has all these strange acronyms and markings on it like "NASA-CNSA". That's when we knew it was old-ass tech. We boarded it to find out more.
And that's where things got real creepy. Through the airlock was, like, this long corridor. It went on into the dark in either direction because only a few of the lights actually worked. I take Allers, Johanne and Crazy Dave down the left hand corridor; Gerrick takes the others down the right. As we walk, our torches keep passing over these tube-like machines. Sort of like cryostasis pods but more crude. So far they've all been empty or misted up or whatever but as we reach this one pod, I can make out a shape in it. We get near to it and what we see makes me totally freak the fuck out. A human corpse. Like, not a skeleton. Just a partially decayed body. Now, I've seen some shit during my time with the Red Fists but nothing this... eerie. The poor bastard is still in his space suit. And the thing looks brand new, not a trace of damage anywhere. On the chest it had "NASA" printed in large text, and underneath:
"Life Ship Geneva - Mission Commander".
______________________________________
In case you don't know, the Wait Calculation is the idea that "continued growth will inhibit setting out for the stars because travelers expect to be overtaken by later travelers who have faster speeds at their disposal".
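To make the idea concrete, here is a toy model in Python; the distance, starting speed, and yearly improvement rate are invented parameters for illustration, not figures from the Wait Calculation literature:

    # If ship speed improves by a fraction g each year, a voyage launched in year t
    # arrives at year t + d / (v0 * (1 + g) ** t). Waiting pays off only until an
    # extra year of waiting stops saving more than a year of travel time.
    def arrival_year(t, d=100.0, v0=0.01, g=0.02):
        # d: distance in light-years, v0: starting speed as a fraction of c,
        # g: assumed yearly improvement in achievable speed
        return t + d / (v0 * (1 + g) ** t)

    best_departure = min(range(500), key=arrival_year)
    print(best_departure, round(arrival_year(best_departure), 1))  # roughly year 267, arriving around year 318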
With the fuel problem resolved, Soundwave found that he could take the necessary time to focus on his other tasks. He was surprised to discover that his super computer would soon, relatively speaking, solve its wormhole-travel equation. The computer actually made the necessary calculations before Soundwave had cracked the code to his upgraded body, a fact that both pleased and frustrated the Decepticon at the same time. This forced Soundwave to shift priorities, for his super-computer was programmed to initiate the wormhole protocol as soon as the equation had been resolved. Soundwave's new body would have to wait.
The journey through the wormholes came at a higher energon expense than Soundwave had expected. This high fuel expense led to further frustration felt by the Decepticon Communicator as he regularly had to stop his ship as it exited a wormhole. Time after time, Soundwave would exit the wormhole, then exit the ship and transform to his satellite alternate mode to refuel. These frequent stops were cause for alarm due to the nature of the travel. At each stop, Soundwave was extremely cautious, scanning nearby space for signs of the Vok or worse, signs of the last of the Time Lords. Soundwave did not consider travelling through wormholes to be a method of time-travel per se; it was merely a shortcut between two points in space. However, Soundwave was not feeling completely confident with his own appraisal of the method of his travels and felt the need to err on the side of caution.
After a considerably longer period of time and a good deal more effort than he had expected, Soundwave eventually arrived at his destination: Cybertron. After exiting the final wormhole of his journey, Soundwave intentionally came to rest beyond the reach of detection by Cybertron's early warning systems. It was here that Soundwave chose to wait, to analyze data gathered from his sensors on the activities on Cybertron's surface and to continue work on the construction of his upgraded body. To further ensure that his vessel was not detected, Soundwave initiated his ship's stealth protocol, a function lifted directly from the schematics of his own comrade, the Decepticon Saboteur, Ravage.
It does not do to leave a live dragon out of your calculations, if you live near him.
J. R. R. Tolkien
Borgund Stave Church (Norwegian: Borgund stavkyrkje) is a former parish church of the Church of Norway in Lærdal Municipality in Vestland county, Norway. It was built around the year 1200 as the village church of Borgund, and belonged to Lærdal parish (part of the Sogn prosti (deanery) in the Diocese of Bjørgvin) until 1868, when its religious functions were transferred to a "new" Borgund Church, which was built nearby. The old church was restored, conserved and turned into a museum. It is funded and run by the Society for the Preservation of Ancient Norwegian Monuments, and is classified as a triple-nave stave church of the Sogn-type. Its grounds contain Norway's sole surviving stave-built free-standing bell tower.
Borgund Stave Church was built sometime between 1180 and 1250 AD with later additions and restorations. Its walls are formed by vertical wooden boards, or staves, hence the name "stave church." The four corner posts are connected to one another by ground sills, resting on a stone foundation. The intervening staves rise from the ground sills; each is tongued and grooved, to interlock with its neighbours and form a sturdy wall. The exterior timber surfaces are darkened by protective layers of tar, distilled from pine.
Borgund is built on a basilica plan, with reduced side aisles, and an added chancel and apse. It has a raised central nave demarcated on four sides by an arcade. An ambulatory runs around this platform and into the chancel and apse, both added in the 14th century. An additional ambulatory, in the form of a porch, runs around the exterior of the building, sheltered under the overhanging shingled roof. The floor plan of this church resembles that of a central plan, double-shelled Greek cross with an apse attached to one end in place of the fourth arm. The entries to the church are in the three shorter arms of the cross.
Structurally, the building has been described as a "cube within a cube", each independent of the other. The inner "cube" is formed by continuous columns that rise from ground level to support the roof. The top of the arcade is formed by arched buttresses, knee jointed to the columns. Above the arcade, the columns are linked by cross-shaped, diagonal trusses, commonly dubbed "Saint Andrew's crosses"; these carry arched supports that offer the visual equivalent of a "second storey". While not a functional gallery, this is reminiscent of contemporary second story galleries of large stone churches elsewhere in Europe. Smaller beams running between these upper supporting columns help clamp everything firmly together. The weight of the roof is thus supported by buttresses and columns, preventing downward and outward movement of the stave walls.
The roof beams are supported by steeply angled scissor trusses that form an "X" shape with a narrow top span and a broader bottom span, tied by a bottom truss to prevent collapse. Additional support is given by a truss that cuts across the "X", below the crossing point but above the bottom truss. The roof is steeply pitched, boarded horizontally and clad with shingles. The original outer roof would have been weatherproofed with boards laid lengthwise, rather than shingles. In later years wooden shingles became more common. Scissor beam roof construction is typical of most stave churches.
Borgund has tiered, overhanging roofs, topped at their intersection by a shingle-roofed tower or steeple. On each of its four gables is a stylised "dragon" head, swooping from the carved roof ridge crests; Hohler remarks on their similarity to the carved dragon heads found on the prows of Norse ships. Similar gable heads appear on small bronze church-shaped reliquaries common in Norway and Europe in this period. Borgund's current dragon heads are possibly 18th-century replacements; similar, original dragon heads remain on older structures, such as Lom Stave Church and nearby Urnes Stave Church. Borgund is one of the only stave churches to have preserved its crested ridge caps. They are carved with openwork vine and entangled plant designs.
The four outer dragon heads are perhaps the most distinctive of all non-Christian symbols adorning Borgund Stave Church. Their function is uncertain, and disputed; if pagan, they are recruited to the Christian cause in the battle between Good and Evil. They may have been intended to keep away evil spirits thought to threaten the church building; to ward off evil, rather than represent it.
On the lower side panel of the steeple are four carved circular cutouts. The carvings are weather-beaten, tarred and difficult to decipher, and there is disagreement about what they symbolize. Some believe they represent the four evangelists, symbolised by an eagle, an ox, a lion and a man. Hauglid describes the carvings as "dragons that extend their heads over to the neighboring field's dragon and bite into it", and points out their similarity to carvings at Høre Stave Church.
The church's west portal (the nave's main entrance) is surrounded by a larger carving of dragons biting each other in the neck and tail. At the bottom of the half-columns that flank the front entrance, two dragon heads spew vine stalks that wind upwards and are braided into the dragons above. The carving shares similarities with the west portal of Ål Stave Church, which also has dragons in a band braiding pattern, and follows the usual composition in the Sogn-Valdres portals, a larger group of portals with very clear similarities. Bugge writes that Christian authority may have come to terms with such pagan and "wild scenes" in the church building because the strife could be interpreted as a struggle between good and evil; in Christian medieval art, the dragon was often used as a symbol of the devil himself, but Bugge believes that the carvings were protective, like the dragon heads on the church roof.
The church interior is dark, as not much daylight enters the building. Some of the few sources of natural light are narrow circular windows along the roof, examples of daylighting. It was supposed that the narrow apertures would prevent the entry of evil spirits. Three entrances are heavily adorned with foliage and snakes, and are only wide enough for one person to enter, supposedly preventing the entry of evil spirits alongside the churchgoers. The portals were originally painted green, red, black, and white.
Most of the internal fittings have been removed. There is little in the building, apart from the row of benches installed along the wall in the ambulatory outside of the arcade and raised platform, a soapstone font, an altar (with 17th-century altarpiece), a 16th-century lectern, and a 16th-century cupboard for storing altar vessels. After the Reformation, when the church was converted for Protestant worship, pews, a pulpit and other standard church furnishings were added; however, these have been removed since the building came under the protection of the Fortidsminneforeningen (The Society for the Preservation of Norwegian Ancient Monuments).
The interior structure of the church is characterized by the twelve free-standing columns that support the nave's elevated central space. On the long side of the church there is a double interval between the second and third pillars, but with a half pillar resting on the lower bracing beam (the pier) which runs in between. The double interval provides free access from the south portal to the church's central compartment, which would otherwise have been obstructed by the middle bar. The tops of the poles are finished with grotesque, carved human and animal masks. The tie-bars are secured with braces in the form of St. Andrew's crosses with a sun-shaped center and carved leaf shapes along the arms. The crosses reappear in less ornate form as braces along the church walls. On the north and south sides of the nave, a total of eight windows let in small amounts of light, and at the top of the nave's west gable is a window of more recent date, probably from pre-Reformation times. On the south wall of the nave, the inauguration crosses are still on the inside of the wall. The interior choir walls and west portal have engraved figures and runes, some of which date to the Middle Ages. One, among the commonest of runic graffiti, reads "Ave Maria". An inscription by Þórir (Thor), written "in the evening at St. Olav's Mass", blames the pagan Norns for his problems; perhaps a residue of ancient beliefs, as these female beings were thought to rule the personal destinies of all in Norse mythology and the Poetic Edda.
The medieval interior of the stave church is almost untouched, save for its restorations and repairs, though the medieval crucifix was removed after the Reformation. The original wooden floor and the benches that run along the walls of the nave are largely intact, together with a medieval stone altar and a box-shaped baptismal font in soapstone. The pulpit is from the period 1550–1570 and the altarpiece dates from 1654, while the frame around the tablet is dated to 1620. The painting on the altarpiece shows the crucifixion in the centre, flanked by the Virgin Mary on the left and John the Baptist on the right. In the tympanum field, a white dove hovers on a blue background. Below the painting is an inscription with golden letters on a black background. A sacrament from the period 1550–1570 in the same style as the pulpit is also preserved. A restoration of the building was carried out in the early 1870s, led by the architect Christian Christie, who removed benches, a second-floor gallery with seating, a ceiling over the chancel, and various windows including two large windows on the north and south sides. As the goal was to return the church to pre-Reformation condition, all post-Reformation interior paintwork was also removed.
Images from the 1990s show deer antlers hung on the lower, east-facing pillars. A local story claims that this is all that remains of a whole stuffed reindeer, shot when it tried to enter during a Mass. A travelogue from 1668 claims that a reindeer was shot during a sermon "when it marched like a wizard in front of the other animal carcasses".
To the south of the church is a free-standing stave-work bell tower that covers remnants of the mediaeval foundry used to cast the church bell. It was probably built in the mid-13th century. It is Norway's only remaining free-standing stave-work bell tower. It was given a new door around the year 1700, but this was removed and not replaced at some time between the 1920s and 1940s, leaving the foundry pit exposed. To preserve the interior, new walls were built as cladding on the outside of the stave walls in the 1990s. One of the medieval bells is on display in the new Borgund church.
Management
In 1868 the building was abandoned as a church but was turned into a museum; this saved it from the commonplace demolition of stave churches in that period. A new Borgund Church was built in 1868 a short distance south of the old church. The old church has not been formally used for religious purposes since that year. Borgund Stave Church was bought by the Society for the Preservation of Ancient Norwegian Monuments in 1877. The first guidebook in English for the stave church was published in 1898. From 2001, the Norwegian Directorate for Cultural Heritage has funded a program to research, restore, conserve and maintain stave churches.
Legacy
The church served as an example for the reconstruction of the Fantoft Stave Church in Fana, Bergen, in 1883 and for its rebuilding in 1997. The Gustav Adolf Stave Church in Hahnenklee, Germany, built in 1908, is modeled on the Borgund church. Four replicas exist in the United States, one at Chapel in the Hills, Rapid City, South Dakota, another in Lyme, Connecticut, the third on Washington Island, Wisconsin, and the fourth in Minot, North Dakota at the Scandinavian Heritage Park.
Borgund is a former municipality in Sogn og Fjordane county, Norway. It was located in the southeastern part of the traditional district of Sogn. The 635-square-kilometre (245 sq mi) municipality existed from 1864 until its dissolution in 1964. It encompassed an area in the eastern part of the present-day Lærdal Municipality. The administrative center of Borgund was the village of Steinklepp, just northeast of the village of Borgund. Steinklepp was the site of a store, a bank, and a school. The historical Filefjell Kongevegen road passes through the Borgund area.
Location
The former municipality of Borgund was situated near the southeastern end of the Sognefjorden, along the Lærdalselvi river. The lower parts of the municipality were farms such as Sjurhaugen and Nedrehegg. They were at an elevation of about 270 m (890 ft) above sea level. Høgeloft, on the border with the neighboring municipality of Hemsedal, is a mountain in the Filefjell range and it was the highest point in Borgund at 1,920 m (6,300 ft) above sea level. The lakes Eldrevatnet, Juklevatnet, and Øljusjøen were also located near the border with Hemsedal.
History
Borgund was established as a municipality in 1864 when it was separated from the municipality of Lærdal. Initially it had a population of 963. During the 1960s, there were many municipal mergers across Norway due to the work of the Schei Committee. On 1 January 1964, the municipality of Borgund (population: 492), the Muggeteigen area (population: 11) of the neighboring Årdal Municipality, and all of Lærdal Municipality (population: 1,755) were merged to form a new, larger municipality of Lærdal.
Norway, officially the Kingdom of Norway, is a Nordic, European country and an independent state in the west of the Scandinavian Peninsula. Geographically speaking, the country is long and narrow, and on the elongated coast towards the North Atlantic are Norway's well-known fjords. The Kingdom of Norway includes the main country (the mainland with adjacent islands within the baseline), Jan Mayen and Svalbard. With these two Arctic areas, Norway covers a land area of 385,000 km² and has a population of approximately 5.5 million (2023). Mainland Norway borders Sweden in the east, Finland and Russia in the northeast.
Norway is a parliamentary democracy and constitutional monarchy, where Harald V has been king and head of state since 1991, and Jonas Gahr Støre (Ap) has been prime minister since 2021. Norway is a unitary state, with two administrative levels below the state: counties and municipalities. The Sami part of the population has, through the Sami Parliament and the Finnmark Act, to a certain extent self-government and influence over traditionally Sami areas. Although Norway has rejected membership of the European Union through two referendums, through the EEA Agreement Norway has close ties with the Union, and through NATO with the United States. Norway is a significant contributor to the United Nations (UN), and has participated with soldiers in several foreign operations mandated by the UN. Norway is among the states that have participated from the founding of the UN, NATO, the Council of Europe, the OSCE and the Nordic Council, and in addition to these is a member of the EEA, the World Trade Organization, the Organization for Economic Co-operation and Development and is part of the Schengen area.
Norway is rich in many natural resources such as oil, gas, minerals, timber, seafood, fresh water and hydropower. Since the beginning of the 20th century, these natural conditions have given the country the opportunity for an increase in wealth that few other countries can now enjoy, and Norwegians have the second highest average income in the world, measured in GDP per capita, as of 2022. The petroleum industry accounts for around 14% of Norway's gross domestic product as of 2018. Norway is the world's largest producer of oil and gas per capita outside the Middle East. However, the number of employees linked to this industry fell from approx. 232,000 in 2013 to 207,000 in 2015.
In Norway, these natural resources have been managed for socially beneficial purposes. The country maintains a welfare model in line with the other Nordic countries. Important service areas such as health and higher education are state-funded, and the country has an extensive welfare system for its citizens. Public expenditure in 2018 is approx. 50% of GDP, and the majority of these expenses are related to education, healthcare, social security and welfare. Since 2001 and until 2021, when the country took second place, the UN has ranked Norway as the world's best country to live in. From 2010, Norway is also ranked at the top of the EIU's democracy index. Norway ranks third on the UN's World Happiness Report for the years 2016–2018, behind Finland and Denmark, a report published in March 2019.
The majority of the population is Nordic. In the last couple of years, immigration has accounted for more than half of population growth. The five largest minority groups are Norwegian-Poles, Lithuanians, Norwegian-Swedes, Norwegian-Syrians including Syrian Kurds, and Norwegian-Pakistanis.
Norway's national day is 17 May; on this day in 1814 the Norwegian Constitution was dated and signed by the presidency of the National Assembly at Eidsvoll. It is stipulated in the law of 26 April 1947 that 17 May is a national public holiday. The Sami national day is 6 February. "Yes, we love this country" is Norway's national anthem; the song was written in 1859 by Bjørnstjerne Bjørnson (1832–1910).
Norway's history of human settlement goes back at least 10,000 years, to the Late Paleolithic, the first period of the Stone Age. Archaeological finds of settlements along the entire Norwegian coast have so far been dated back to 10,400 before present (BP); the oldest find is today considered to be a settlement at Pauler in Brunlanes, Vestfold.
For a period these settlements were considered to be the remains of settlers from Doggerland, an area which today lies beneath the North Sea, but which was once a land bridge connecting today's British Isles with Danish Jutland. But the archaeologists who study the initial phase of the settlement in what is today Norway reckon that the first people who came here followed the coast along what is today Bohuslän. That they arrived in some form of boat is absolutely certain, and there is much evidence that they could easily move over large distances.
Since the last Ice Age, there has been continuous settlement in Norway. It cannot be ruled out that people lived in Norway during the interglacial period, but no trace of such a population or settlement has been found.
The Stone Age lasted a long time; half of the time that our country has been populated. There are no written accounts of what life was like back then. The knowledge we have has been painstakingly collected through investigations of places where people have stayed and left behind objects that we can understand have been processed by human hands. This field of knowledge is called archaeology. The archaeologists interpret their findings and the history of the surrounding landscape. In our country, the uplift after the Ice Age is fundamental. The history of the settlements at Pauler is no more than fifteen years old.
The Fosna culture settled parts of Norway sometime between 10,000 and 8,000 BC (see Stone Age in Norway). The dating of rock carvings is set to Neolithic times (in Norway between 4000 BC and 1700 BC), and they show activities typical of hunters and gatherers.
Agriculture with livestock and arable farming was introduced in the Neolithic, in the form of swidden farming, where the farmers move on when the field no longer produces the expected yield.
More permanent and persistent farm settlements developed in the Bronze Age (1700 BC to 500 BC) and the Iron Age. The earliest runes have been found on an arrowhead dated to around AD 200. Many more inscriptions are dated to around 800, and a number of petty kingdoms developed during these centuries. In prehistoric times, there were no fixed national borders in the Nordic countries and Norway did not exist as a state. The population in Norway probably fell towards the year 0.
Events in this time period, the centuries before the year 1000, are glimpsed in written sources. Although the sagas were written down in the 13th century, many hundreds of years later, they provide a glimpse into what was already a distant past. The story of the Fimbulwinter gives us a historical picture of something that happened and which in our time, with the help of dendrochronology, can be interpreted as a natural disaster in the year 536, created by a volcanic eruption in El Salvador.
In the period between 800 and 1066 there was a significant expansion, and it is referred to as the Viking Age. During this period, Norwegians, as Swedes and Danes also did, traveled abroad in longships with sails as explorers, traders, settlers and as Vikings (raiders and pirates). By the middle of the 11th century, the Norwegian kingship had been firmly established, building its right as descendants of Harald Hårfagre and then as heirs of Olav the Holy. The Norwegian kings, and their subjects, now professed Christianity. In the time of Håkon Håkonsson, after the civil wars, there was a small renaissance in Norway with extensive literary activity and diplomatic activity with Europe. The Black Death came to Norway in 1349 and killed around half of the population. The entire state apparatus and Norway then entered a period of decline.
Between 1396 and 1536, Norway was part of the Kalmar Union, and from 1536 until 1814 Norway had been reduced to a tributary part of Denmark, in the personal union of Denmark-Norway. This personal union entered into an alliance with Napoléon Bonaparte, with a war that brought bad times and famine in 1812. In 1814, Denmark-Norway lost the English Wars, part of the Napoleonic Wars, and the Danish king was forced to cede Norway to the king of Sweden in the Treaty of Kiel on 14 January of that year. After a Norwegian attempt at independence, Norway was forced into a loose union with Sweden, but was allowed to keep its own constitution, the Constitution of 1814. In this period, Norwegian romantic national feeling flourished, and the Norwegians tried to develop and establish their own national self-worth. The union with Sweden was broken in 1905, after the threat of war, and Norway became an independent kingdom with its own monarch, Haakon VII.
Norway remained neutral during the First World War, and at the outbreak of the Second World War, Norway again declared itself neutral, but was invaded by National Socialist Germany on 9 April 1940.
Norway became a member of the Western defense alliance NATO in 1949. Two attempts to join the EU were voted down in referendums by small margins in 1972 and 1994. Norway has been a close ally of the United States in the post-war period. Large discoveries of oil and natural gas in the North Sea at the end of the 1960s led to tremendous economic growth in the country, which is still ongoing. Traditional industries such as fishing are also part of Norway's economy.
Stone Age (before 1700 BC)
When most of the ice had disappeared, vegetation spread over the landscape, and due to a warm climate around 2000–3000 BC the forest grew much higher up than in modern times. Land uplift after the ice age led to a number of fjords becoming lakes and dry land. The first people probably came from the south along the coast of the Kattegat, and overland into Finnmark from the east. The first people probably lived by gathering, hunting and trapping. A good number of Stone Age settlements have been found which show that such hunting and trapping people stayed for a long time in the same place or returned to the same place regularly. Large amounts of gnawed bones show that they lived on, among other things, reindeer, elk, small game and fish.
Flint was imported from Denmark, and apart from small natural deposits along the southern coast, all flint found in Norway was brought there by people. At Espevær, greenstone was quarried for tools in the Stone Age, and greenstone tools from Espevær have been found over large parts of Western Norway. Around 2000–3000 BC the usual farm animals such as cows and sheep were introduced to Norway. Livestock probably meant a fundamental change in society, in that part of the population had to become permanent residents or live a semi-nomadic life. Livestock farming may also have led to conflict with hunters.
The oldest traces of people in what is today Norway have been found at Pauler, a farm in Brunlanes in Larvik municipality in Vestfold. The farm has given its name to a number of Stone Age settlements that were excavated and examined in 2007 and 2008 by archaeologists from the Museum of Cultural History at UiO. The investigations were carried out in connection with the new route for the E18 motorway west of Farris. The oldest settlement, located more than 127 m above sea level, is dated to be about 10,400 years old (uncalibrated, more than 11,000 years in real calendar years). The ice sheet may still have been visible from here when people first settled. This locality has been named Pauler I and is today considered the oldest confirmed trace of humans in Norway. The place is in the hills above the Pauler tunnel on the E18 between Larvik and Porsgrunn. "The pioneer settlement" is the term archaeologists use for this oldest settlement. Archaeologists have speculated about where the first people in what is today Norway came from. It has been suggested that they could have come by boat, or perhaps across the ice, from Doggerland in the North Sea, but there is now broad consensus that they came north along what is today the Bohuslän coast. The Fosna culture, the Komsa culture and the Nøstvet culture are the traditional terms for hunting cultures from the Stone Age. One thing is certain: the first people in the country mastered travel by water, and within a short time they were able to make use of the entire long coastline.
According to one theory, a new people immigrated to the country in the New Stone Age (4000 BC–1700 BC), the so-called Stone Axe people. Rock carvings from this period show motifs from hunting and fishing, which were still important livelihoods. A megalithic tomb from this period has been found in Østfold.
It is uncertain whether there were organized societies or state-like associations in the Stone Age in Norway. Finds from settlements indicate that many people lived together, probably more than one family, forming a somewhat larger, organized band.
Finnmark
In prehistoric times, animal husbandry and agriculture were of little economic importance in Finnmark. Livelihoods in Finnmark were mainly based on fishing, gathering, hunting and trapping, and eventually domestic reindeer herding became widespread in the Middle Ages. Archaeological finds from the Stone Age have been referred to as the Komsa culture and comprise around 5,000 years of settlement. Finnmark probably got its first settlement around 8000 BC. It is believed that the coastal areas became ice-free around 11,000 BC and the fjord areas around 9,000 BC, after which willow, grass, heather, birch and pine became established. Finnmarksvidda was covered by pine forest around 6000 BC. After the Ice Age, the land rose around 80 meters in the inner fjord areas (Alta, Tana, Varanger). Due to ice melting in the polar region, the sea rose in the period 6400–3800 BC, and in areas with little land uplift, some settlements from the first part of the Stone Age were flooded. On Sørøya, the net sea-level rise was 12 to 14 meters and many residential areas were flooded.
According to Bjørnar Olsen, there are many indications of a connection between the oldest settlement in Western Norway (the "Fosna culture") and that in Finnmark, but it is uncertain in which direction the settlement spread. In the earliest part of the Stone Age, settlement in Finnmark was probably concentrated in the coastal areas, and these settlements reflect a lifestyle of great mobility without permanent dwellings. The inner regions, such as Pasvik, were probably used seasonally. The archaeologically proven settlements from the Stone Age in inner Finnmark and Troms are linked to lakes and large watercourses. The oldest petroglyphs in Alta are usually dated to 4200 BC, that is, the Neolithic. Bjørnar Olsen believes the oldest may be up to 2,000 years older than this.
From around 4000 BC a slow deforestation of Finnmark began, and around 1800 BC the distribution of vegetation was roughly the same as in modern times. The change in vegetation may have increased the distance between the reindeer's summer and winter grazing grounds. The land uplift continued slowly from around 4000 BC, at the same time as the sea-level rise stopped.
According to Gutorm Gjessing, the settlement in Finnmark and large parts of northern Norway in the Neolithic was semi-nomadic with movement between four seasonal settlements (following the pattern of life in Sami siida in historical times): On the outer coast in summer (fishing and seal catching) and inland in winter (hunting for reindeer, elk and bear). Povl Simonsen believed instead that the winter residence was in the inner fjord area in a village-like sod house settlement. Bjørnar Olsen believes that at the end of the Stone Age there was a relatively settled population along the coast, while inland there was less settlement and a more mobile lifestyle.
Bronze Age (1700 BC–500 BC)
Bronze was used for tools in Norway from around 1500 BC. Bronze is an alloy of tin and copper, and these metals had to be imported because they were not mined in the country at the time. Bronze is believed to have been a relatively expensive material. The Bronze Age in Norway can be divided into two phases:
Early Bronze Age (1700–1100 BC)
Younger Bronze Age (1100–500 BC)
For the prehistoric (unwritten) era, there is limited knowledge about social conditions and possible state formations. From the Bronze Age, there are large burial cairns of piled stone along the coast of Vestfold and Agder, among other places. It is likely that only chieftains or other great men could erect such grave monuments, and there was probably some form of organized society linked to them. In the Bronze Age, society was more organized and stratified than in the Stone Age. A wealthy class of chieftains emerged with close connections to southern Scandinavia. The settlements became more permanent, and people adopted the horse and the ard (a simple plough). They acquired bronze status symbols, lived in longhouses, and were buried in large burial mounds. Petroglyphs from the Bronze Age indicate that people practiced sun worship.
Finnmark
In the last millennium BC the climate became cooler and the pine forest disappeared from the coast; pine forest was, for example, only found in the innermost part of the Altafjord, while the outer coast was almost treeless. Around the year 0, the limit of birch forest was south of Kirkenes. Animals with forest habitats (elk, bear and beaver) disappeared, and the reindeer probably established their annual migration routes sometime at that time. In the period 1800–900 BC there were significantly more settlements in the interior, and utilization of the hinterland was particularly noticeable on Finnmarksvidda. From around 1800 BC until the year 0 there was a significant increase in contact between Finnmark and areas to the east, including Karelia (where metals, including copper, were produced) and central and eastern Russia. The youngest petroglyphs in Alta show far more boats than the earlier phases, and the boats are reminiscent of types depicted in petroglyphs in southern Scandinavia. It is unclear what influence southern Scandinavian societies had as far north as Alta before the year 0. Many of the cultural features that are considered typically Sami in modern times were created or consolidated in the last millennium BC; this applies, among other things, to the custom of burial in slab-lined chambers in scree. The Mortensnes burial ground may have been used for 2,000 years, until around 1600 AD.
Iron Age (c. 500 BC–c. 1050 AD)
The Einangsteinen is one of the oldest Norwegian runestones; it is from the 4th century.
Contemporary depiction of Vikings
Researchers reckon that around 500 BC the Bronze Age was replaced by the Iron Age, as iron took over as the most important material for weapons and tools. Bronze, wood and stone were still used. Iron was cheaper than bronze, easier to work than flint, and could be used for many purposes; iron probably became something everyone could own. Iron could, among other things, be used to make solid and sharp axes, which made it much easier to fell trees. In the Iron Age, gold and silver were also used, partly for decoration and partly as means of payment. It is unknown which language was used in Norway before our era. From around the year 0 until around the year 800, everyone in Scandinavia (except the Sami) spoke Proto-Norse, a North Germanic language. Subsequently, several different languages developed in this area that were only partially mutually intelligible. The Iron Age is divided into several periods:
Early Iron Age
Pre-Roman Iron Age (c. 500 BC–c. 0)
Roman Iron Age (c. 0–c. AD 400)
Migration Period (approx. 400–600). In this period, new peoples came to Norway, and ruins of hillforts and similar structures are interpreted as signs that the immigration may have taken the form of a violent invasion.
Younger Iron Age
Merovingian period (500–800)
The Viking Age (793–1066)
Norwegian Vikings went on plundering expeditions and trading voyages around the coastal countries of Western Europe. Large groups of Norwegians emigrated to the British Isles, Iceland and Greenland. Harald Hårfagre began a unification of Norway late in the 9th century, which was completed by Harald Hardråde in the 1060s. The country was Christianized under the kings Olav Tryggvason, who fell in the battle of Svolder (1000), and Olav Haraldsson (the Saint), who fell in the battle of Stiklestad in 1030.
Sources of prehistoric times
Shrinking glaciers in the high mountains, including in Jotunheimen and Breheimen, have since around the year 2000 uncovered objects from the Viking Age and earlier. These are objects of organic material that have been preserved by the ice and that elsewhere in nature break down in a few months. The finds are getting older as the melting lets archaeologists reach deeper layers of the ice. About half of all archaeological discoveries on glaciers in the world are made in Oppland. In 2013, a 3,400-year-old shoe and a robe from the year 300 were found. Finds at Lomseggen in Lom, published in 2020, revealed, among other things, well-preserved horseshoes used on a mountain pass. The many hundreds of items include preserved clothing, knives, whisks, mittens, leather shoes, wooden chests and horse equipment. A piece of cloth dated to the year 1000 has preserved its original colour. In 2014, a wooden ski from around the year 700 was found in Reinheimen. The ski is 172 cm long and 14 cm wide, with a preserved binding of leather and wicker.
Pytheas of Massalia gave the oldest known account of what was probably the coast of Norway, perhaps somewhere on the coast of Møre. Pytheas visited Britannia around 325 BC and traveled further north to a country by the "Ice Sea". Pytheas described the short summer night and the midnight sun farther north. He wrote, among other things, that people there made a drink from grain and honey. Caesar wrote in his work on the Gallic campaign about the Germanic tribe of the Harudes. Other Roman sources from around the year 0 mention the land of the Cimbri (Jutland) and the Cimbrian headland (Skagen), and state that the Cimbri and Harudes lived in this area. Some of these peoples may have immigrated to Norway and there become known as the Horder (as in Hordaland). Sources from the Mediterranean area referred to the islands of Scandia, Scandinavia and Thule ("the outermost of all islands"). The Roman historian Tacitus wrote around the year 100 a work about Germania and mentioned a people of Scandia, the Sviones. Ptolemy wrote around the year 150 that the Kharudes (Horder) lived further north than all the Cimbri; in the north lived the Finnoi (Finns or Sami) and in the south the Gutai (Goths). The Nordic countries and Norway lay outside the Roman Empire, which dominated Europe at the time. The Gothic-born historian Jordanes wrote in the 6th century about 13 tribes or peoples in Norway, including the raumaricii (probably Romerike), ragnaricii (Ranrike) and finni or skretefinni (skridfinns or ski Finns, i.e. Sami), as well as a number of unclear groups. Procopius wrote at the same time about Thule, north of the land of the Danes and Slavs; Thule was ten times as large as Britannia and the largest of all the islands. In Thule, the sun stayed up for 40 days straight in the summer. After the Migration Period, southern Europeans' accounts of northern Europe became fuller and more reliable.
Settlement in prehistoric times
Norway has around 50,000 farms with their own names. Farm names have persisted for a long time, over 1,000 years, perhaps as much as 2,000 years. Name researchers have arranged the different types of farm names chronologically, which provides a basis for determining when a place was used by people or received permanent settlement. Uncompounded landscape names such as Haug, Eid, Vik and Berg are believed to be the oldest. Archaeological traces indicate that some areas were inhabited earlier than the farm names suggest. Burial mounds also indicate permanent settlement. For example, the burial ground at Svartelva in Løten was used from around the year 0 to the year 1000, when Christianity took over. The first farmers probably used large areas of infields and outfields, and new farms were probably established from a few "mother farms". Names such as By (or Bø) show that a place is an old site of settlement. From the older Iron Age, names ending in -heim (a common Germanic word meaning place of residence) and -stad tell of settlement, while -vin and -land tell of the use of the place. Farm names in -heim often appear as -um, -eim or -em, as in Lerum and Seim; these are often large farms in the centre of the village. New farm names ending in -stad and -land were also established in the Viking Age. The first farmers probably used the best areas. The largest burial grounds, the oldest archaeological finds and the oldest farm names are found where the arable land is richest and most extensive.
It is unclear whether the settlement expansion in the Roman period, the Migration Period and the rest of the Iron Age was due to immigration or to internal development and population growth. Among other things, it is difficult to demonstrate where in Europe any immigrants would have come from. The permanent residents had both fields (where grain was grown) and livestock that grazed in the outfields, but it is uncertain which of these was more important. Population growth from around the year 200 led to greater utilization of outlying land, for example in the form of settlements in the mountains. During the Migration Period, it also seems to have become common in parts of the country to have clustered farmyards or a form of village settlement.
Norwegian expansion northwards
From around the year 200, there was a certain migration by sea from Rogaland and Hordaland to Nordland and Sør-Troms. Those who moved settled as a sedentary Iron Age farming population and became dominant over the original population, which may have been Sami. The immigrant Norwegians, the so-called bumenn, kept livestock that were fed indoors in the winter and also practiced some grain cultivation and fishing. The northern border of the Norwegians' settlement was originally at Toppsundet near Harstad, and by around the year 500 Norwegian settlement extended to Malangsgapet. That was as far north as it was possible to grow grain at the time. Malangen was considered the border between Hålogaland and Finnmork until around 1400. Further into the Viking Age and the Middle Ages, there was immigration and settlement of Norwegian speakers along the coast north of Malangen. Around the year 800, Norwegians lived along the entire outer coast to Vannøy. The Norwegians partly copied Sami livelihoods such as whaling, fur hunting and reindeer husbandry. It was probably this area between Malangen and Vannøy that was the home district of Ottar from Hålogaland. In the Viking Age, there were also some Norwegian settlements further north and east. East of the North Cape there are scattered archaeological finds of Norwegian settlement from the Viking Age. There are Norwegian names for fjords and islands from the Viking Age, including fjord names with "-anger". Around the year 1050, there were Norwegian settlements on the outer coast of Western Finnmark. Traders and tax collectors traveled even further.
North of Malangen there were Norse farming settlements in the Iron Age. Malangen was considered Finnmark's western border until 1300. There are some archaeological traces of Norse activity around the coast from Tromsø to Kirkenes in the Viking Age. Around Tromsø, the research indicates a Norse/Sami mixed culture on the coast.
From the year 1100 and for the next 200–300 years, there are no traces of Norwegian settlement north and east of Tromsø. It is uncertain whether this was due to depopulation, to the Norwegians further north not being Christianized, or to there being no churches north of Lenvik or Tromsø. Norwegian settlement in the far north appears in sources from the 14th century. In the Hanseatic period, the settlement developed into large areas specialized in commercial fishing, whereas earlier (in the Viking Age) there had been farms combining fishing and agriculture. In 1307, a fortress and the first church east of Tromsø were built in Vardø. Vardø became a small Norwegian town, while Vadsø remained Sami. Norwegian settlements and churches appeared along the outermost coast in the Middle Ages. After the Reformation, perhaps as a result of a decline in fish stocks or fish prices, Norwegian settlements appeared in the inner fjord areas such as Lebesby in Laksefjord. Some fishing villages at the far end of the coast were abandoned for good. In the interior of Finnmark, there was no national border for a long time, and Kautokeino and Karasjok were joint Norwegian-Swedish areas with strong Swedish influence. The border with Finland was established in 1751 and with Russia in 1826.
On a Swedish map from 1626, Norway's border is drawn at Malangen; with this map Sweden signalled a desire to control the Sami areas, which had been common territory.
The term Northern Norway only came into use at the end of the 19th century; administratively the area was referred to as Tromsø Diocese after Tromsø became a bishopric in 1840. There had been other designations previously: Hålogaland originally included only Helgeland, and as Norse settlement spread north in the Viking Age and the Middle Ages, Hålogaland came to be used for the area north to roughly Malangen, while Finnmark or "Finnmarken", "the land of the Sami", lay beyond. The term Northern Norway was coined at a café table in Kristiania in 1884 by members of the Nordlændingernes Forening and only became commonly used in the interwar period, when it eventually supplanted "Hålogaland".
State formation
The battle of Hafrsfjord in the year 872 has long been regarded as the event that made Norway a kingdom. The year of the battle is uncertain (it may have been 10–20 years later). The whole of Norway was not united in that battle: the process had begun earlier and continued for a couple of hundred years afterwards. Unification meant that the geographical area became subject to one political authority and became a political unit. The geographical area was already perceived as a unit, as is known, among other things, from Ottar from Hålogaland's account to King Alfred of Wessex around the year 880. Ottar described "the land of the Norwegians" as very long and narrow, and narrowest in the far north. East of the wilderness lay Sveoland in the south and Kvenaland in the north. When Ottar sailed south along the land from his home (Malangen) to Skiringssal, he always had Norway ("Nordveg") on his port side and the British Isles on his starboard side. The journey took a good month. Ottar perceived "Nordveg" as a geographical unit, but did not imply that it was a political unit. Ottar distinguished Norwegians from Swedes and Danes. It is unclear why Ottar perceived the population spread over such a large area as one people. It is unclear whether Norway as a geographical term or Norwegians as the name of an ethnic group is the older. The Norwegians had a common language which, in the centuries before Ottar, did not differ much from the languages of Denmark and Sweden.
According to Sverre Steen, it is unlikely that Harald Hårfagre was able to control this entire area as one kingdom. The saga of Harald was written 300 years later, and at his death Norway consisted of several smaller kingdoms. Harald probably controlled a larger area than anyone before him; at its greatest, his kingdom probably included the coast from Trøndelag to Agder and Vestfold as well as parts of Viken. There were probably several smaller kingdoms of varying extent before Harald, and some of these are reflected in traditional landscape names such as Ranrike and Ringerike. Landscape names ending in -land (Rogaland) and -mark (Hedmark), as well as names such as Agder and Sogn, may reflect political units that existed before Harald.
According to Sverre Steen, the unification of the kingdom was completed at the earliest at the battle of Stiklestad in 1030, and the introduction of Christianity was probably a significant factor in the establishment of Norway as a state. Håkon I the Good (Adalsteinsfostre) introduced the leidang system, under which the "coastal land" (as far inland as the salmon went up the rivers) was divided into skipreider, ship districts that were each to provide a longship with soldiers and supplies. The leidang was probably introduced as a defense against the Danes. The border with the Danes was traditionally at the Göta älv, and several times before and after Harald Hårfagre the Danes had control over central parts of Norway.
Christianity was known and existed in Norway before Olav Haraldson's time. The spread occurred both from the south (today's Denmark and northern Germany) and from the west (England and Ireland). Ansgar of Bremen , called the "Apostle of the North", worked in Sweden, but he was never in Norway and probably had little influence in the country. Viking expeditions brought the Norwegians of that time into contact with Christian countries and some were baptized in England, Ireland and northern France. Olav Tryggvason and Olav Haraldson were Vikings who returned home. The first Christians in Norway were also linked to pre-Christian local religion, among other things, by mixing Christian symbols with symbols of Odin and other figures from Norse religion.
According to Sverre Steen, the introduction of Christianity in Norway should not be understood as a nationwide revival. At the Moster thing (Mostratinget), Christian law was adopted as the law of the country and later incorporated into the laws of the individual jurisdictions. Christianity primarily brought new forms to social life: among other things, images of the gods were prohibited, it was forbidden to "put out" unwanted infants (exposure, letting them die), and it was forbidden to have multiple wives. The church became a nationwide institution with a special group of officials tasked with protecting the church and consolidating the new religion. According to Sverre Steen, Christianity and the church in the Middle Ages should therefore be considered together, and they became a new unifying factor in the country. The church and Christianity linked Norway to Roman Catholic Europe, with Church Latin as the common language and the same time reckoning as the rest of Europe, and the church in Norway was organized much like the churches in Denmark, Sweden and England. Norway received papal approval in 1070 and became its own church province in 1152, with an archbishop in Nidaros.
With Christianity, the country got three social powers: the peasants (organized through the things), the king with his officials, and the church with the clergy. The things were the oldest institution: at the allthings, all armed men had the right (and in part an obligation) to attend, while the lagthings were attended by emissaries from a region (that is, the lagthings were representative assemblies). The things both ruled in conflicts and established laws. The laws were memorized by the participants and written down around the year 1000 or later in the Gulatingsloven, Frostatingsloven, Eidsivatingsloven and Borgartingsloven. The person who won a case at the thing had to see to the enforcement of the judgment themselves.
Early Middle Ages (1050s–1184)
The early Middle Ages is considered in Norwegian history to be the period between the end of the Viking Age around 1050 and the coronation of King Sverre in 1184. The beginning of the period can be dated differently, from around the year 1000, when the Christianization of the country took place, up to 1100, when the Viking Age was over from an archaeological point of view. The years from 1035 to 1130 were a time of (relative) internal peace in Norway, even though several of the kings attempted campaigns abroad, including in 1066 and 1103.
During this period, the church's organization was built up. This led to a gradual change in religious customs. Religion went from being a domestic matter to being regulated by common European Christian law, and the royal power gained increased authority and influence. Slavery ("thralldom") was gradually abolished. The population grew rapidly during this period, as the thousands of farm names ending in -rud show.
The urbanization of Norway is a historical process that has slowly but surely changed Norway from the early Viking Age to today, from a country based on agriculture and the harvest of the sea to one increasingly based on trade and industry. As early as the ninth century, the country got its first urban community, and in the eleventh century the first permanent towns appeared.
In the 1130s, civil war broke out. This was due to a power struggle and to the fact that anyone who claimed to be a king's son could claim the right to the throne. The disputes escalated into extensive, year-round warfare when Sverre Sigurdsson started a rebellion against the church's and the lendmenn's candidate for the throne, Magnus Erlingsson.
Emergence of cities
The oldest Norwegian cities probably emerged from the end of the 9th century. Oslo, Bergen and Nidaros became episcopal seats, which stimulated urban development there, and the king built churches in Borg, Konghelle and Tønsberg. Hamar and Stavanger became new episcopal seats and are referred to in the late 12th century as towns together with the trading places Veøy in Romsdal and Kaupanger in Sogn. In the late Middle Ages, Borgund (on Sunnmøre), Veøy (in Romsdalsfjorden) and Vågan (in Lofoten) were referred to as small trading places. Urbanization in Norway occurred in few places compared to the neighboring countries; only 14 places appear as cities before 1350. Stavanger became a bishopric around 1120–1130, but it is unclear whether the place was already a city then. The fertile Jæren and outer Ryfylke were probably relatively densely populated at that time. A particularly large concentration of Irish artefacts from the Viking Age has been found in Stavanger and Nord-Jæren.
It has been difficult to estimate the population of the Norwegian medieval cities, but it is considered certain that the cities grew rapidly in the Middle Ages. Oscar Albert Johnsen estimated the total urban population before the Black Death at 20,000, of which 7,000 in Bergen, 3,000 in Nidaros, 2,000 in Oslo and 1,500 in Tunsberg. Based on archaeological research, Lunden estimates that Oslo had around 1,500 inhabitants in 250 households in the year 1300. Bergen was more densely built up and, with the concentration of exports there, became Norway's largest city, holding a special position for several hundred years. Knut Helle suggests a total urban population of at most 20,000 in the High Middle Ages, of which almost half lived in Bergen.
The Bjarkøyretten regulated conditions in the cities (especially Bergen and Nidaros) and in the trading places, and for Nidaros it had many of the same provisions as the Frostating law. Magnus Lagabøte's City Law replaced the Bjarkøyrett and from 1276 regulated the settlement in Bergen, with corresponding laws also drawn up for Oslo, Nidaros and Tunsberg. The City Law applied within the city's takmark (the surrounding area belonging to the town). The City Law determined that the city's public streets consisted of wide commons running perpendicular to the shoreline and streets running parallel to the shoreline, with similar arrangements in Nidaros and Oslo. The lanes were small streets of up to 3 cubits (1.4 metres), linked to the individual properties. In the Middle Ages, the Norwegian cities were usually surrounded by wooden fences. The urban development largely consisted of low wooden houses, which stood in contrast to the relatively numerous and dominant churches and monasteries built in stone.
The City Law and supplementary provisions often determined where in the city different goods could be traded; in Bergen, for example, cattle and sheep could only be traded on the Square, and fish only on the Square or directly from the boats at the quayside. In Nidaros, the blacksmiths were required to stay away from the densely populated areas due to the risk of fire, while the tanners had to stay away from the settlements due to the strong smell. The City Law also attempted to regulate the influx of people into the city (among other things to prevent begging in the streets) and had provisions on fire protection. In Oslo, from the 13th century or earlier, it was common to have town yards consisting of individual buildings of a couple of storeys around a courtyard, with access from the street through a gateway. Oslo's medieval town yards housed one to four households. Livestock, including pigs and cows, could be kept in the town yards, while pastures and fields were found in the city's takmark. In the yards there could be several outbuildings such as warehouses, barns and stables. Archaeological excavations show that much of the building stock in medieval Oslo, Trondheim and Tønsberg resembled the elongated yards that have been preserved at Bryggen in Bergen. The property boundaries in Oslo appear to have persisted for many hundreds of years, and in Bergen right from the Middle Ages to modern times.
High Middle Ages (1184–1319)
After the civil wars of the 12th century, the country had a relative heyday in the 13th century. Iceland and Greenland came under royal authority in 1262, and the Norwegian Empire reached its greatest extent under Håkon IV Håkonsson. The last king of Harald's line (Haraldsætten), Håkon V Magnusson, died without a son in 1319. Until the 17th century, Norway stretched all the way down to the mouth of the Göta älv, which was then Norway's border with Sweden and Denmark.
Just before the Black Death around 1350, there were between 65,000 and 85,000 farms in the country, and there had been a strong growth in the number of farms from 1050, especially in Eastern Norway. In the High Middle Ages, the church or ecclesiastical institutions controlled 40% of the land in Norway, while the aristocracy owned around 20% and the king owned 7%. The church and monasteries received land through gifts from the king and nobles, or through inheritance and gifts from ordinary farmers.
Settlement and demography in the Middle Ages
Before the Black Death, the number of farms in Norway kept growing due to farm division and land clearing. Settlement spread to more marginal agricultural areas, higher up inland and further north. Eastern Norway had the largest reserves of land to draw on and saw the greatest population growth towards the High Middle Ages. Along the coast north of Stad, settlement probably increased in line with the extent of the fisheries. The Icelandic Rimbegla relates, around the year 1200, that the border between Finnmark (the land of the Sami) and resident Norwegians in the interior was at Malangen, while the border out on the coast was at Kvaløya. From the end of the High Middle Ages, there were more Norwegians along the coast of Finnmark and Nord-Troms. In the inner forest and mountain tracts along the current border between Norway and Sweden, the Sami exploited the resources all the way down to Hedmark.
There are no censuses or other records of population and settlement in the Middle Ages. At the time of the Reformation, the population was below 200,000, and only around 1650 did the population return to the level from before the Black Death. When Christianity was introduced after the year 1000, the population was around 200,000. After the Black Death, many farms and settlements were abandoned and deserted; in the most marginal agricultural areas up to 80% of the farms were abandoned. Places such as Skien, Veøy and Borgund (Ålesund) went out of use as trading towns. By the year 1300, the population was somewhere between 300,000 and 560,000, depending on the calculation method. Common methods start from detailed information about the farms in each village and compare this with the situation in 1660, when there are reliable head counts. From 1300 to 1660, there was a change in the economic base, so that the coastal villages held a larger share of the population. The inland areas of Eastern Norway had a relatively larger population in the High Middle Ages than after the Reformation. Kåre Lunden concludes that the population in the year 1300 was close to 500,000, of which 15,000 lived in cities. Lunden believes that the population in 1660 was still slightly lower than the peak before the Black Death and points out that farm settlement in 1660 had not reached the same extent as in the High Middle Ages. In 1660, the populations of Troms and Finnmark were 6,000 and 3,000 respectively (2% of the total population); in 1300 these areas had an even smaller share of the country's population, and in Finnmark there were hardly any Norwegian-speaking inhabitants. In the High Middle Ages, the climate was more favorable for grain cultivation in the north. Based on the number of farms, the population increased by 162% from 1000 to 1300; in Northern and Western Europe as a whole, the growth was 200% in the same period.
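As an illustration of the farm-count method described above, here is a minimal sketch of the kind of scaling calculation involved, assuming hypothetical inputs; the farm and population figures below are placeholders for illustration, not the historians' actual data.

# Hedged illustration of estimating the c. 1300 population by scaling the
# well-documented 1660 head count by the ratio of registered farms.
farms_1300 = 75_000        # placeholder, within the 65,000-85,000 range given for c. 1350
farms_1660 = 60_000        # placeholder count of farms in the 1660 registers
population_1660 = 440_000  # placeholder figure for the 1660s head counts

population_1300_estimate = population_1660 * farms_1300 / farms_1660
print("Estimated population c. 1300: %.0f" % population_1300_estimate)
# -> about 550,000 with these inputs; different farm counts and assumptions
#    about household size produce the 300,000-560,000 spread mentioned above.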
Late Middle Ages (1319–1537)
Due to repeated plague epidemics, the population was roughly halved and the least productive of the country's farms were laid waste. It took several hundred years before the population again reached the level from before 1349. Those who survived the epidemics, however, were left with more resources each, since there were fewer people to share them among. Tax revenues for the state almost collapsed, and a large part of the noble families died out or sank into peasant status due to the fall in land rents (landskyld). The Hanseatic League took over trade and shipping and dominated fish exports. The Archbishop of Nidaros was the country's most powerful man economically and politically, as the royal dynasty married into the Swedish royal house in 1319 and died out in 1387. Eventually, Copenhagen became the political center of the kingdom and Bergen the commercial center, while Trondheim remained the religious center.
From Reformation to Autocracy (1537–1660)
In 1537, the Reformation was carried out in Norway. With that, almost half of the country's landed property was confiscated by the royal power at the stroke of a pen. The large seizure increased the king's income and enabled him, among other things, to expand his military power and consolidate his position in the kingdom. From roughly the time of the Reformation and in the following centuries, the state increased its power and importance in people's lives. Until around 1620, the state administration was fairly simple and unspecialized: in Copenhagen, the central administration consisted mainly of a chancellery and a treasury (the Rentekammer), while the lensherrer (fief holders) governed both civil affairs (through bailiffs and sheriffs) and the military in their districts; the lensherrer collected taxes and oversaw trade. The accounts were unclear and lacked summaries. The clergy, which had great power as a separate organization, was after the Reformation appointed by the state church and administered from Copenhagen. In this period, Norway was governed by (mainly) Danish noble lensherrer, who acted as intermediaries between the peasants and the Oldenborg king in matters of justice, taxation and customs collection.
From 1620, the state apparatus went through major changes in which specialization of functions was a main feature. The lensherre's tasks were divided among several more specialized officials; the lensherrer retained formal authority over them, but in practice these officials answered to the central administration in Copenhagen. Among other things, a separate military officer corps and a separate customs service were established, and separate treasurers for taxes and fees were appointed. The Overbergamt, the central body for overseeing mining operations in Norway, was established in 1654 with an office in Christiania, and this agency was to oversee the mining chiefs in the Nordenfjeld and Sønnenfjeld areas (the mines at Kongsberg and Røros had been established in the previous decades). The formal transition from rule by fief holders to rule by salaried officials took place after 1660, but the real changes had taken place from around 1620. The increased specialization and the transition to rule by officials meant that experts, not amateurs, were in charge of each area, and according to Sverre Steen this civil service meant that the absolute monarchy did not become a personal dictatorship.
From 1570 until 1721, the Oldenborg dynasty was in repeated wars with the Vasa dynasty in Sweden. The financing of these wars led to a severe increase in taxation which caused great distress.
In political-geographical terms, the Oldenborg kings had to cede to Sweden the Norwegian provinces of Jemtland, Herjedalen, Idre and Särna, as well as Båhuslen. As part of the financing of the wars, the state apparatus was expanded. Royal power began to assert itself to a greater extent in the administration of justice. Until this period, cases of violence and defamation had been treated as civil cases between citizens. The level of punishment was greatly increased. During this period, at least 307 people were executed for witchcraft in Norway. Culturally, the country was marked by the fact that the written language became Danish, because of the Bible translation and the University of Copenhagen's monopoly on education.
From the 16th century, production became more oriented towards sale and not just own consumption. Earlier, it was particularly the fisheries that had produced a surplus large enough to be sold to distant markets; the dried fish trade via Bergen is known from around the year 1100. In the 16th century, the yield from the fisheries multiplied, especially due to the influx of herring in Western Norway and Trøndelag and because new gear made fishing for herring and skrei (spawning cod) more efficient. Line fishing and cod nets, introduced in the 17th century, were controversial because small-scale fishermen believed they favored the burghers in the cities.
Forestry and the timber trade became an important business, particularly because of the water-driven frame saw (oppgangssag), which made it possible to saw boards and planks of all kinds for sale abroad. The demand for timber in Europe increased at the same time; Norway had plenty of forest, and in the 17th century timber became the country's most important export product. There were hundreds of sawmills in the country, and the largest resembled factories. In 1680, the king regulated the timber trade by allowing exports only from privileged sawmills and only in certain quantities.
From the 1520s, some silver was mined in Telemark. The peasants chased away the German miners, whereupon the king executed five peasants and demanded compensation from the other rebellious peasants. The background for the harsh treatment was that the king wanted to assert his authority over the extraction of precious metals. The search for metals led to the silver works at Kongsberg after 1624, copper works in the mountain villages between Trøndelag and Eastern Norway, and ironworks, among other places, in Agder and lower Telemark. The financial yield of the mines at that time is unclear because there are no reliable accounts. Kongsberg ma
"Hmm, according to my calculations we seemed to have missed the 2453rd century by a long-shot. We've landed in present day... Leicester city?"
Poor Danny and Clara, some holiday this turned out to be!
You didn't think I would make a TARDIS and NOT have a photo of it 'being bigger on the inside', did you? I'm hurt!
*Doctor Who, TARDIS and all related names, images, etc. are property of the British Broadcasting Corporation. All rights reserved.*
*EDIT*(09/02/16): Check out the Beyond The Brick Interview here, thanks BTB!
Trujillo is a city, with a population of 20,780 (2020 calculation), and a municipality on the northern Caribbean coast of the Honduran department of Colón, of which the city is the capital.
The municipality had a population of about 30,000 (2003). The city is located on a bluff overlooking the Bay of Trujillo. Behind the city rise two prominent mountains, Mount Capiro and Mount Calentura. Three Garifuna fishing villages—Santa Fe, San Antonio, and Guadelupe—are located along the beach.
Trujillo has received plenty of attention as the potential site of a proposed Honduran charter city project, according to an idea originally advocated by American economist Paul Romer. Often referred to as a Hong Kong in Honduras and advocated by among others the Trujillo-born Honduran president Porfirio Lobo Sosa, the project has also been met with skepticism and controversy, especially due to its supposed disregard for the local Garifuna culture.
Christopher Columbus landed in Trujillo on August 14, 1502, during his fourth and final voyage to the Americas. Columbus named the place "Punta de Caxinas". It was the first time he touched the Central American mainland. He noticed that the water in this part of the Caribbean was very deep and therefore called the area Golfo de Honduras, i.e., The Gulf of the Depths.
The history of the modern town begins in 1524, shortly after the conquest of the Aztec Empire, in an expedition led by Hernán Cortés. Cortés sent Cristóbal de Olid to found a Spanish outpost in the region, and he established a town named Triunfo de la Cruz in the vicinity. When Olid began using the town as his base for establishing his own realm in Central America, Cortés sent Francisco de las Casas to remove him. Las Casas lost most of his fleet in a storm, but he was nevertheless able to defeat Olid and restore the region to Cortés. Upon assuming control, Las Casas decided to relocate the town to its present location, because the natural harbor was larger. At the same time, Triunfo de la Cruz was renamed Trujillo. His deputy, Juan López de Aguirre, was charged with establishing the new town, but he sailed off, leaving another deputy, named Medina, to found the town. In the coming years Trujillo became more important as a shipment point for gold and silver mined in the interior of the country. Because of its sparse population, the city also became a frequent target of pirates.
Under Spanish rule Trujillo became the capital of Honduras, but because of its vulnerability the capital was changed to the inland town of Comayagua. The fortress, Fortaleza de Santa Bárbara (El Castillo), which sits on the bluff overlooking the bay, was built by the Spanish around 1550. Nevertheless, it was inadequate to really defend Trujillo from pirates—the largest gathering of pirates in history took place in the vicinity in 1683—or rival colonial powers: the Dutch, French, and English. The town was destroyed several times between 1633 and 1797, and during the eighteenth century, the Spanish all but abandoned Trujillo because it was deemed indefensible.
When Honduras obtained its independence from Spain in 1821, Trujillo permanently lost its status as capital city, first to Comayagua, which in turn lost it to Tegucigalpa in 1880. From this same period onwards, Trujillo began to prosper again.
In 1860, the mercenary William Walker, who had seized control of neighboring Nicaragua, was caught and executed in Trujillo by orders of Florencio Xatruch. His tomb is a local tourist attraction.
American author O. Henry (William Sydney Porter) spent about a year living in Honduras, primarily in Trujillo. He later wrote a number of short stories set in "Coralio", in the fictional Central American country of "Anchuria", based on the real town of Trujillo. Most of these stories appear in his book Cabbages and Kings.
Credit for the data above is given to the following website:
en.wikipedia.org/wiki/Trujillo,_Honduras
© All Rights Reserved - you may not use this image in any form without my prior permission.
Inside the Ladd Observatory Transit Room (built 1891)
Brown University
Providence, Rhode Island
May 6th, 2014
Ever wonder how everyone kept track of time way back when? Read below to find out. This was taken in the Transit Room at Brown University’s Ladd Observatory. Above the telescope in the photo, the roof opens up along the north-to-south line. From there, the stars could be viewed and the exact time could be determined for Providence, RI. The telegraph device seen to the right of the telescope would be used to mark the time so the official time could be adjusted. If you are the least bit interested in astronomy, definitely plan to visit the Ladd Observatory when it is open to the public on Tuesday nights; see this for more info on times: www.brown.edu/Departments/Physics/Ladd/
Some info on the Transit Room:
At the far end of the observatory one can find the old transit room. This is where an observer would use a transit telescope to time stars crossing the local meridian, an imaginary north-south line passing through the observatory. Local time could then be precisely established, which was needed not only for astronomical calculations but also for deriving civil time. Ladd Observatory provided a timekeeping service for the Providence area. And speaking of time, please note the various clocks throughout the observatory. They were an important part of this timekeeping tradition.
SOURCE: www.theskyscrapers.org/-42
Some info on how the Transit Room got to be restored:
Many observatories kept time before it became a function of the federal government, said Targan, also associate dean of the College for science education. When the nation moved from an agrarian lifestyle to an industrially based economy, keeping exact time became more vital. Train accidents occurred in the 1800s because the conductors’ watches were on different times, Targan said.
Since then, new technology such as computers and navigation systems has made precise timekeeping even more important. The science of timekeeping has kept up with the technology, Targan said.
Ladd’s two telescopes are less advanced than the atomic clock at the National Institute of Standards and Technology in Boulder, Colo., which sets the international standard for time, but they still allow people to learn about the science and history behind timekeeping.
“We lose track of how we determine time in the first place,” Targan said. “When people look at their watches, what does that mean and what does that come from?”
With the restoration, Targan hopes to educate the public and answer those questions.
So, where do clocks get their time?
“The Earth itself is the most reliable timepiece we have,” astronomy concentrator David Eichhorn ’09 said.
Astronomers use the rotational period of the earth to keep time. “By looking at the stars entering above, you can time those stars as they cross key imaginary lines across the sky,” Targan said.
In Ladd Observatory’s transit room, an observer can press a key that makes an extra mark on the chronometer, which is already marked to show when certain stars move across the sky. By measuring the difference between the marks, an observer can calibrate clocks accordingly, Eichhorn said.
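To make the calibration idea concrete, here is a minimal sketch in Python of the arithmetic involved, using hypothetical clock readings rather than Ladd Observatory data: a star transits when the local sidereal time equals its right ascension, and the difference between the predicted transit time and the chronograph mark gives the correction to apply to the station clock.

# Illustrative sketch of meridian-transit clock calibration.
# All values are hypothetical; the method simply compares the predicted
# transit time of a star with the time actually marked on the chronograph.

def clock_correction_seconds(predicted_transit_s, marked_transit_s):
    """Correction to add to the station clock, in seconds."""
    return predicted_transit_s - marked_transit_s

# Example: the almanac predicts a transit at 21:14:05.0 by the station clock,
# but the observer's chronograph mark lands at 21:14:07.3,
# so the clock is running 2.3 seconds fast.
predicted = 21 * 3600 + 14 * 60 + 5.0
observed = 21 * 3600 + 14 * 60 + 7.3
print("Clock correction: %+.1f s" % clock_correction_seconds(predicted, observed))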
Fixing this telegraph system is one of the many repairs planned, Targan said.
Sarah Zurier, special projects coordinator at the Rhode Island Historical Preservation and Heritage Commission, which awarded the grant, listed other intended tasks including fixing slits in the roof that provide the telescope with a view of the sky, repairing windows and upgrading the electrical system.
Zurier cited the observatory’s history of serving the public as one of the reasons for its selection for the grant.
SOURCE: www.browndailyherald.com/2008/01/24/ladds-timekeepers-to-...
Why do sociologists, when calculating the liveability score of an urban community, continue to leave out of their calculations the most significant characteristic that makes a city rich and enduring? The quality of the viola section in the local orchestra, the number of “vape shops”, the population of bassoon players in town (per capita); these as well as other mundane considerations are often found in the ledgers of Urban Studies departments; catalogues that contain more entries than there are cases of narcolepsy in Des Moines. The thoroughness of these tomes is undeniably impressive, but the reputations of the compilers will be forever sullied until they descend from their academic perches and get down to “brass tacks”. For too long has the really important question been neglected: how many joints in the neighbourhood serve up matzah ball soup?
The Cream City, I am sad to report, does not do well in this regard. How a congregation so rich in culture continues to be a “matzah-ball desert” is beyond my ability to explain. Everything else needed to raise up the Philistine to a higher level (and God knows, they need it), is here for the taking: the Harley-Davidson Museum, a bingo palace sitting astride the banks of the serene Menominee River, “Da Crusher Statue” that pays homage to one of the prime-movers of inspired athleticism amongst the local citizenry: those secular temples are all here for the edification of the regional populace and fortunate visitor. I could go on with the list, but do you really want me to? Back to the subject at hand: where in this town are the friggn’ Matzah Balls?
It pains me to relate that currently there are only six restaurants in Milwaukee County that offer the delicacy. To be wrong in reporting this dismal number would be a blessing; please correct me if this recent research is faulty. Nothing would give me greater pleasure than to be able to describe how our streets and avenues are awash in a rich and gold-hued broth along with attendant balls. It would even help me emerge from a recent bout with ennui. Although that malaise has been with me for only a brief time (49 years to be exact) it is far past time to take on a more vigorous approach. After all, honesty in all things, especially for the food critic, is indispensable. “Truth, naked, unblushing truth, the first virtue of more serious history, must be the sole recommendation of this personal narrative.” So said Edward Gibbon in his autobiography. Shouldn’t the same unblushing truth be bestowed upon not only the student of history, but also upon those poor rubes seriously in search of a decent matzah ball?
The sad, sad reality must be reported and confronted: a metropolitan area that contains one and a half million citizens (you read that number right, Sarge) and only six matzah ball venues?
Embarrassing and even shameful.
Here’s the skivvy: of these pitifully few heroic establishments, three have been offering the tasty globes for decades: “Benji's Deli” on Oakland (and their suburban branch in Fox Point) and “Jake’s” on 20th and North. Two more, “Allie Boy’s Bagelry & Luncheonette” and “Fool’s Errand” are newcomers to the sweepstakes, and one, “Bistro in the Glen,” has been in the game for close to a decade. All of them contribute respectable M-balls for the delectation of the local population.
It was only by happenstance, and a wrong turn onto National Avenue, that I came across one of these six noble ports. As it turned out, this particular shop’s version of the dish under consideration was more than respectable. In fact, the other five dispensaries had better “up their game”, or their work will soon be assigned to the dustbin of matzah-ball history. Had the turn been west instead of east from First Street onto that venerable boulevard, this important truth may never have come to light. Plus I would have ended up at my intended destination, a jollification for retired viola players (an oxymoron if there ever was one). Who needs that? Not me, Bubba. As so often may happen in the narrative of an itinerant life, making a wrong turn put me in a far better place. It took only as long as a proficient high school orchestra is able to complete Mikhail Glinka’s Overture to Ruslan and Ludmilla (approximately four minutes and 42 seconds on a good day) for the happy news to sink in: turning left instead of right placed me directly in front of Allie Boy’s Bagelry & Luncheonette.
I went in there.
Damn I’m glad I did. On the menu was not only a cornucopia of bagels, but an offering of far greater import. You guessed it, matzah ball soup. The price was a bit daunting to a pauper such as myself: eight dollars for a pint. Fortunately I had just taught a viola lesson and was flush with unexpected cash and shopping coupons redeemable at selected K-Marts. One viola lesson fee for one pint of Matzah ball soup. Jimmy Carter was wrong. Life is fair after all.
Home to the ‘burbs went the soup and an “Everything Bagel.” The “Everything Bagel” was everything an “Everything Bagel” should be. In taking it out of the bag, enough sesame seeds fell off the pastry to supply the dietary needs of the Bronx Zoo Aviary during mating season. Also embedded in the remarkable object were bits of toasted garlic, dried chives, and pungent black pepper; an impressive orchestration indeed. Move over, Nikolai Rimsky-Korsakov. Along with the main course came a delightful ornament, the “Shmear” of the day. It happened to be a memorable one: cream cheese infused with the flavours of bourbon and maple syrup. Putting that elixir down the hatch had a spectacular effect. Could it be? National Avenue and environs were no longer there. Instead, waiting outside the door of the shop was “Up-Nort.” (Such is the vernacular used in the vicinity to describe any locale north of Brown Deer Road.) For those who have never been north of Brown Deer Road, a brief description is necessary. It is a place where “Crown Royal” flows like a meandering river and the pines wave in consanguinity with the capricious winds; in short, a far more gracious place than suburban Hoboken, New Jersey.
This culinary quodlibet had been perfectly baked. The results of that delicate and sensitive process presented a pastry that was magisterial in affect but forgivingly chewy at the same time, a two-fold pleasure and an impressive achievement of the baker’s art. Putting your chompers into the specimen might seem a bit intimidating at first. It certainly was for me, but once commenced there were no regrets. There are times when it is best to dive into the symphony and let the toasted garlic bits fly where they may. It was such a time. The journey into the interior of this particular “Everything Bagel” was worth the initial resistant, tentative nibble.
And the soup? Never has eight bucks gone so far. The dumpling itself, plopped down into a broth lightly salted and ornamented generously with a melange of carrots, soft onions and celery, was gigantic but consistently tender all the way to the distant center. There was a complexity to the object that adumbrated a quantity of exotic culinary conceits. The sphere was more than a mere matzah ball; it was a globe that contained many things. To snarf it down was to explore a new world.
All this for eight smackers? Allie and the boys should make it ten. jonathanbrodie.substack.com?r=90umj&utm_medium=ios
7x7in lokta paper, Micron 05. Almost a Spirograph effect on this one - it was a compass I used, just wrong calculations... Had to blacken out the bits I did not like, and then I decided to add colour (Sakura Metallic) :)
By my calculations there should be six candles lit tonight. This is a rework of a shot I posted last year.
From our calculations, we have reconstructed what the Moon would look like on these dates from Earth (omitting moon phases). The simulation shows that the Moon 2.5 billion years ago had an apparent diameter 18.64% larger and an average distance 60,000 km shorter than the current average. The shorter distance of 60,000 km was calculated through a study of the rock strata in the Karijini National Park in Western Australia. These rocks, which would testify to a 60,000 km shorter Earth-Moon distance, date back 2.46 billion years. The file is available for download at 81 million pixels with a resolution of 9000x9000 pixels.
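For what it's worth, the quoted percentage follows from apparent size scaling inversely with distance. A minimal Python check (assuming a present mean Earth-Moon distance of about 384,400 km, a figure not stated in the caption) lands close to the number cited:

    # Apparent-size check: angular diameter scales as 1 / distance.
    d_now = 384_400.0          # assumed present mean Earth-Moon distance, km
    d_then = d_now - 60_000.0  # distance 2.46 billion years ago, per the Karijini study
    increase = (d_now / d_then - 1.0) * 100.0
    print(f"apparent diameter larger by ~{increase:.1f}%")   # ~18.5%, close to the 18.64% quoted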
Credit: L'Informatico Mondo di Pipplo.
Our Facebook Page: bit.ly/PipploFB
Our YouTube Channel: bit.ly/PipploYT
In the shape of a baboon, this coffin (now empty) contained the mummified animal intended as a votive offering to the god Thoth. Revered for his wisdom, Thoth was believed to be the inventor of speech and calculation and was patron of scribes. The baboon was one of his sacred incarnations. Made separately, the front and back of the coffin have four holes for pegs to join them.
Photographed at the Walters Art Museum, Baltimore, Maryland.
2/31 276/366 October 2 2020
The Salvage Team discovered the wreckage after about a two-hour walk up the winding hillside. “You were wrong on your calculations there, Doc!” Marva chided. “I thought you said it would take only an hour to go these three miles.”
“That’s if we didn’t have to beat through the underbrush, encounter a few snakes…” (he paused, and said under his breath, “Why did it have to be snakes?!”) “and then ascend this slimy path made wet in the rain. If it had been dry, level ground, we could have made it in less than an hour.” Doc tried to defend his answer, as he always does, and everyone who knew him expected him to say something like this. He snapped a few photos to document the finding of the plane.
“That’s okay, guys. We finally found the plane. But you, a zoologist, afraid of snakes!” Cliff humored Doc a bit as he knew that snakes were not on his ‘favorite’ list of animals. “Marva, please call it in to Rachel that we’ve found the plane and we’ll start surveying things to see how safe this thing is to check it out.” Cliff was relieved that the plane was still somewhat intact, and this was a promising sign that the gems were safe.
Marva radioed Rachel and told her they found the plane and gave their coordinates based on the geo-tracking device that Stacy had. “That will give us the pin-point location to have the helicopter come once we’ve found the gems.”
“That may be a day or two, depending what we run into here. That plane may look intact, but it’s resting in a precarious position. The only way we can get up there would be to ascend that short mound over to the left of it.” Heinrich pointed to what looked like some temple that had been built centuries ago but was now abandoned. “I’ll go get the big truck with our other tools. Stacy, could you drive the other vehicle back just a way to allow me some room?”
“Sure, Heinrich,” Stacy replied. She was an expert driver with almost any kind of all-terrain vehicle, large or small. She grew up on a farm and was driving a tractor by age ten. She also competed in dirt bike races and did a little drag racing with her older brother when they were in college. Her life on the farm gave her the idea to study botany, and her interest in rocks and gems got her into gemology.
Cliff was relieved that the plane was located and that they stood a good chance of recovering the gems. But he also had another uneasy feeling about this place. Something was not right, and his intuition was telling him something. He whispered, “I’ve got a bad feeling about this.”
"La Pierre des trois paroisses" is part of the group 2 Fage menhirs from the "Cham des Bondons" and was restored during works of 1982-1983. The stone was lying until recently and the stigmata of quarrying are clearly visible. I am not sure how the calculation was made, but you can read that the menhir was once 5m45 in length to its current 3m. The standing stone looks over the puech/hills covered in other posts.
Prior to 1860, 'ancient' finds were often attributed to Celtic origins, and it was only after discoveries of detailed portable art objects with depictions of long-extinct animals that ideas of prehistoric and ice age cultures formed: the Englishman Christy and the Frenchman Lartet pioneering the work. Quite apart from this, and in parallel, in the north of England the 1850s and 60s saw a second conceptualisation of a deep past with the formalisation of the study of the 'Rock art' that is now understood as the cup and ring petroglyphs of the bronze age (John Charles Langland, George Tate...). Today the Palaeolithic and the ages of stone and metals are understood as belonging to vastly different chapters of our long prehistory (unless you are a journalist of The Guardian newspaper who continually muddles the artwork of Lascaux with the rock art of Ilkley Moor).
Writing in the 1870s, Richard Jefferies observed: "The ploughman eagerly tears away the earth and moves the stone, to find a jar, as he thinks - in fact a funeral urn. Like all other uneducated people in the Far East as well as the West, he is imbued with the idea of finding hidden treasure, and breaks the urn in pieces to discover nothing; it's empty. He will carry the fragments home to the farm, where, after a moment's curiosity, they will be thrown away with potsherds and finally used to mend the floor of the cowpen."
This quote shows evidence of an awakening of a sensibility to our prehistoric past: an interesting insight, but perhaps biased against the 'uneducated'... In the middle of the 19th century, the educated Vicar of Poubeau in the region of Luchon in the Pyrenees placed explosives in the rock d'Arriba-Pardin, one of several specific semi-megalithic geologies with cultural roots in prehistory. The marks of the explosives are still visible today, and the original 2 m fertility phallus stone is no longer present and needs erecting.
In 1880, de Sautuola published drawings of the Altamira rock art just 12 years after its discovery, and the dank lethargy towards his work from his academic contemporaries shows that the understanding of 'prehistory' was not a Kuhnian revolution.
In 1937, Violet Alford, specialist of folk dancing and someone who believed that common prehistoric roots help to explain similarities of dance forms from across Europe, commented that "We remark with surprise that no governmental authority has intervened in front of the destruction of prehistoric stones, neither in the past or in the present; and it continues. In 1934 a vicar from near to Pau attacked a dolmen to finally place a cross in cement on one of its stones".
I do not have the exact date of the mining lines that were cut across this menhir, suffice to say that megaliths and rock art need goodwill on their side so that the real lines of our human times can be respected.
AJM
She has a tea shop in the middle of nowhere on the Annapurna Circuit trail. She had trouble doing calculations but it was in her best interest to serve the customers properly. I forgot to ask her name!
So this page is a bit more involved than rev1 but I make far more assumptions.
At the top is a calculation for the error in the produced "gravity" based on a person moving along a floor at 4.5 m/s (FAST) and a change in radius of 6 m above or below the mid level. The error ends up being around 0.075 g but I do not know how that change would actually affect a person. Realistically, people probably won't be sprinting around this station very often so it shouldn't be too much of an issue.
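A rough re-creation of that error estimate, in Python, is sketched below. The station radius and spin rate are not restated on this page, so the values here are my own assumptions; with a mid-level radius of 1.5 km spun for 1 g, the running term alone comes out near 0.075 g, but different assumed dimensions give different numbers.

    import numpy as np
    # Apparent-gravity error on a spinning habitat from (a) moving 6 m off the mid-level
    # radius and (b) running at 4.5 m/s along the floor. Radius and target gravity are
    # assumed, not taken from the notebook page.
    g0 = 9.81
    R = 1500.0                        # assumed mid-level radius, m
    omega = np.sqrt(g0 / R)           # spin rate giving 1 g at radius R
    dR, v = 6.0, 4.5                  # radius change (m) and running speed (m/s) from the page
    err_radius = omega**2 * dR / g0               # fractional change from the radius change
    v_rim = omega * R
    err_motion = ((v_rim + v)**2 / R - g0) / g0   # running with the spin direction
    print(f"radius term ~ {err_radius:.3f} g, motion term ~ {err_motion:.3f} g")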
To the right are VERY rough calculations for the mass of the habitat structure and of the cable. I am sure I did something wrong though because I found that a single habitat would weigh more than the ISS...
Below those are even rougher calculations for the mass of the tether. I have no experience here so this mass is probably completely crazy.
At the very bottom and to the left is a simple Center of Mass equation to determine an appropriate radius for the apparent CoM of the generator. I may make the generator a lot more massive to balance out the cable as the distance is currently 0.5km.
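A minimal sketch of that center-of-mass balance (with made-up masses, since the actual numbers live in the handwritten notes rather than in this description) could look like:

    # Center-of-mass balance for a habitat and a counterweight ("generator") on a tether.
    # All values below are placeholder assumptions for illustration only.
    m_hab = 4.0e5      # habitat mass, kg (assumed)
    r_hab = 1500.0     # habitat distance from the desired spin axis, m (assumed)
    m_gen = 1.2e6      # generator/counterweight mass, kg (assumed)
    # Put the combined centre of mass on the spin axis (habitat at +r_hab, generator at
    # -r_gen, tether mass neglected): m_hab * r_hab = m_gen * r_gen
    r_gen = m_hab * r_hab / m_gen
    print(f"generator must sit {r_gen:.0f} m from the axis")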
I'll consult with a professor about all of this if I can.
well it's almost Feb... by my calculations I have @ 6 -7 weeks till I get my first fix of speed and fury of 2012. Until then have to get my addiction soothed by cruzn thru my archives....
peace to you and yours---- ACE
...as always Image Copyright SB ImageWorks 2012. No further use without my explicit written permission.
I take a look at the environment around me. It seems like everyone’s still drinking as usual as I slip out, and Joe’s already blasting Joy Division for the villains. The meter bar’s running out on my wrist. Maybe I could implement the momentum feature that builds up the invisibility energy there....hmmm. Just as I was about to do the Asian calculations, Phorus has caught up with me again. Oof.
Phorus: “Going somewhere, Mr Sharp?”
Multi: “Oh yes. I’m having fun in the lounge, but I lost my way to the washroom...heh.”
Phorus: “It’s down by the right hall, turn a slight left. Do you need to be accompanied there?”
Multi: “Thanks but no thanks. I think I’m good. I’m going to look for my sidekick.”
Phorus: “Alright....” (he gives a sly grin, eyes somewhat twitching).
And then I look for Doc. By this point, he seems to be really into the job, now wasted. Very high and drunk. If Joe hadn’t hooked him up with his own substances, he wouldn’t be like this.
Multi: “Man, we gotta go. Get some air and talk outside.”
Doc: “What?! Can’t ya see I’m having fun? I’ve never lost myself that much before!”
Multi: “C’mon man. I got more beer at home. You wanna have some?”
Doc: “No! No way! I am not letting you ruin my day, Jon! You gotta let the sidekicks have the fun!”
Multi: “Well, you got wasted last night too.”
Doc: “Have I?! I don’t care, just let me do what I want!”
I walk away, taking a big, deep breath, before walking back to the table Doc was dancing on, a syringe in my hand. I jump up and stab him in the neck. Some other patrons scream a little bit, and I drag his body to the nearest couch. The substance is some sort of drug, reverse toxic kind of stuff. Anaesthetic included. With Doc taken out, I wave my hand to Joe, who seems to have caught my signal. We are leaving....until some of the villains block my exit. I give them a quick excuse that I have to drive Doc to the hospital, but still, it doesn’t seem to work. I see Phorus on the opposite balcony, and he waves his hand as well, letting us go.
By the time the door opens, Phorus shouts. Everyone stops partying.
“HE’S AN UNDERCOVER! SHOOT THE INTRUDER!”
With no second thought, I run out of the door immediately. It seems that if there’s no bar fight, then a street fight is possible. However, the street is crowded with a couple villains, so I decide to blast them. Realising Doc is still there under heavy fire, I drag his unconscious butt with me. I look for the backdoor as I search my way out, and once again, I nod to Joe, who still keeps the music running (on memes).
Multi: “Joe! You’ll have to play the Cheesewind album!”
Joe: “What Cheesewind album lad? I’ve never heard of it....”
Multi: “I made the lyrics possible man. Check the bottom of your bag. Blast that one.”
Joe: “Alright lad. I hope it’s original. Defo will see you on the other side. We’ll meet later.”
Then I rush through the backdoor after doing a couple unusual backflips and energy shots. Running through a couple streets with my buddy in tow, I activate my wrist watch. The lights flicker....
***
Doc: “Where am I?”
Multi: “Look outside the window.”
Doc: “Oh man. Oh god.”
Multi: “You wanna man the tech here?”
Doc: “What tech? Wait what? How did I—you had a vehicle and you didn’t tell me?”
Multi: “I made one in secret, kinda like that big Batwing from TDKR but on a budget haha. I dragged your butt all the way up to various alleys.”
Doc: “Ha! That sounds fun—-hold up. What do you mean dragging my butt here? What did you do to me Jon???!!!”
Multi: “Uh....well. I guess this is a perfect time to talk when you got people shooting on your back. Yes, you got so wasted that Joe even said it looked like prom gone wrong. Or was it uni....but anyways, a substance, nice drug plus anaesthetic gas that I likely stole from the clinic and mixed it together.”
Doc: “What?! That was your idea to get me high and drunk! I didn’t want anything to do with it for investigating! And you stole from the clinic as well?!”
Multi: “Well, I mean, you were doing a great job for distraction. And Jose finally played Cheesewind. That self made album would have got you dancing like crazy. I shoulda taped that as well. Haha.”
Doc: “Jon, if you actually did that, I will really kill you. And you made an album as well??!! How many secrets are you keeping from me, your best friend?!”
Joe: “Long enough to keep those missiles coming in lads. I’m talking through the comms here. You might wanna brace it and take flight quickly.”
Multi: “On it. Just with a less overreactions.....I guess we’ll still die anyway.”
Basically, two guys on the Multimobile, about to fly into the moon ET style. With about a dozen other guys doing Mario Kart behind me. This is very much the video game I’m not expecting.
Different forms of fluctuations of the terrestrial gravity field are observed by gravity experiments. For example, atmospheric pressure fluctuations generate a gravity-noise foreground in measurements with super-conducting gravimeters. Gravity changes caused by high-magnitude earthquakes have been detected with the satellite gravity experiment GRACE, and we expect high-frequency terrestrial gravity fluctuations produced by ambient seismic fields to limit the sensitivity of ground-based gravitational-wave (GW) detectors. Accordingly, terrestrial gravity fluctuations are considered noise and signal depending on the experiment. Here, we will focus on ground-based gravimetry. This field is rapidly progressing through the development of GW detectors. The technology is pushed to its current limits in the advanced generation of the LIGO and Virgo detectors, targeting gravity strain sensitivities better than 10^−23 Hz^(−1/2) above a few tens of Hz. Alternative designs for GW detectors evolving from traditional gravity gradiometers such as torsion bars, atom interferometers, and superconducting gradiometers are currently being developed to extend the detection band to frequencies below 1 Hz. The goal of this article is to provide the analytical framework to describe terrestrial gravity perturbations in these experiments. Models of terrestrial gravity perturbations related to seismic fields, atmospheric disturbances, and vibrating, rotating or moving objects are derived and analyzed. The models are then used to evaluate passive and active gravity noise mitigation strategies in GW detectors, or alternatively, to describe their potential use in geophysics. The article reviews the current state of the field, and also presents new analyses especially with respect to the impact of seismic scattering on gravity perturbations, active gravity noise cancellation, and time-domain models of gravity perturbations from atmospheric and seismic point sources. Our understanding of terrestrial gravity fluctuations will have great impact on the future development of GW detectors and high-precision gravimetry in general, and many open questions still need to be answered, as emphasized in this article.
Keywords: Terrestrial gravity, Newtonian noise, Wiener filter, Mitigation
Introduction
In the coming years, we will see a transition in the field of high-precision gravimetry from observations of slow lasting changes of the gravity field to the experimental study of fast gravity fluctuations. The latter will be realized by the advanced generation of the US-based LIGO [1] and Europe-based Virgo [7] gravitational-wave (GW) detectors. Their goal is to directly observe for the first time GWs that are produced by astrophysical sources such as inspiraling and merging neutron-star or black-hole binaries. Feasibility of the laser-interferometric detector concept has been demonstrated successfully with the first generation of detectors, which, in addition to the initial LIGO and Virgo detectors, also includes the GEO600 [119] and TAMA300 [161] detectors, and several prototypes around the world. The impact of these projects onto the field is two-fold. First of all, the direct detection of GWs will be a milestone in science opening a new window to our universe, and marking the beginning of a new era in observational astronomy. Second, several groups around the world have already started to adapt the technology to novel interferometer concepts [60, 155], with potential applications not only in GW science, but also geophysics. The basic measurement scheme is always the same: the relative displacement of test masses is monitored by using ultra-stable lasers. Progress in this field is strongly dependent on how well the motion of the test masses can be shielded from the environment. Test masses are placed in vacuum and are either freely falling (e.g., atom clouds [137]), or suspended and seismically isolated (e.g., high-quality glass or crystal mirrors as used in all of the detectors listed above). The best seismic isolations realized so far are effective above a few Hz, which limits the frequency range of detectable gravity fluctuations. Nonetheless, low-frequency concepts are continuously improving, and it is conceivable that future detectors will be sufficiently sensitive to detect GWs well below a Hz [88].
Terrestrial gravity perturbations were identified as a potential noise source already in the first concept laid out for a laser-interferometric GW detector [171]. Today, this form of noise is known as “terrestrial gravitational noise”, “Newtonian noise”, or “gravity-gradient noise”. It has never been observed in GW detectors, but it is predicted to limit the sensitivity of the advanced GW detectors at low frequencies. The most important source of gravity noise comes from fluctuating seismic fields [151]. Gravity perturbations from atmospheric disturbances such as pressure and temperature fluctuations can become significant at lower frequencies [51]. Anthropogenic sources of gravity perturbations are easier to avoid, but could also be relevant at lower frequencies [163]. Today, we only have one example of a direct observation of gravity fluctuations, i.e., from pressure fluctuations of the atmosphere in high-precision gravimeters [128]. Therefore, almost our entire understanding of gravity fluctuations is based on models. Nonetheless, potential sensitivity limits of future large-scale GW detectors need to be identified and characterized well in advance, and so there is a need to continuously improve our understanding of terrestrial gravity noise. Based on our current understanding, the preferred option is to construct future GW detectors underground to avoid the most dominant Newtonian-noise contributions. This choice was made for the next-generation Japanese GW detector KAGRA, which is currently being constructed underground at the Kamioka site [17], and also as part of a design study for the Einstein Telescope in Europe [140, 139]. While the benefit from underground construction with respect to gravity noise is expected to be substantial in GW detectors sensitive above a few Hz [27], it can be argued that it is less effective at lower frequencies [88].
Alternative mitigation strategies includes coherent noise cancellation [42]. The idea is to monitor the sources of gravity perturbations using auxiliary sensors such as microphones and seismometers, and to use their data to generate a coherent prediction of gravity noise. This technique is successfully applied in gravimeters to reduce the foreground of atmospheric gravity noise using collocated pressure sensors [128]. It is also noteworthy that the models of the atmospheric gravity noise are consistent with observations. This should give us some confidence at least that coherent Newtonian-noise cancellation can also be achieved in GW detectors. It is evident though that a model-based prediction of the performance of coherent noise cancellation schemes is prone to systematic errors as long as the properties of the sources are not fully understood. Ongoing experiments at the Sanford Underground Research Facility with the goal to characterize seismic fields in three dimensions are expected to deliver first data from an underground seismometer array in 2015 (see [89] for results from an initial stage of the experiment). While most people would argue that constructing GW detectors underground is always advantageous, it is still necessary to estimate how much is gained and whether the science case strongly profits from it. This is a complicated problem that needs to be answered as part of a site selection process.
More recently, high-precision gravity strainmeters have been considered as monitors of geophysical signals [83]. Analytical models have been calculated, which allow us to predict gravity transients from seismic sources such as earthquakes. It was suggested to implement gravity strainmeters in existing earthquake-early warning systems to increase warning times. It is also conceivable that an alternative method to estimate source parameters using gravity signals will improve our understanding of seismic sources. Potential applications must still be investigated in greater detail, but the study already demonstrates that the idea to use GW technology to realize new geophysical sensors seems feasible. As explained in [49], gravitational forces start to dominate the dynamics of seismic phenomena below about 1 mHz (which coincides approximately with a similar transition in atmospheric dynamics where gravity waves start to dominate over other forms of oscillations [164]). Seismic isolation would be ineffective below 1 mHz since the gravitational acceleration of a test mass produced by seismic displacement becomes comparable to the seismic acceleration itself. Therefore, we claim that 10 mHz is about the lowest frequency at which ground-based gravity strainmeters will ever be able to detect GWs, and consequently, modelling terrestrial gravity perturbations in these detectors can focus on frequencies above 10 mHz.
This article is divided into six main sections. Section 2 serves as an introduction to gravity measurements focussing on the response mechanisms and basic properties of gravity sensors. Section 3 describes models of gravity perturbations from ambient seismic fields. The results can be used to estimate noise spectra at the surface and underground. A subsection is devoted to the problem of noise estimation in low-frequency GW detectors, which differs from high-frequency estimates mostly in that gravity perturbations are strongly correlated between different test masses. In the low-frequency regime, the gravity noise is best described as gravity-gradient noise. Section 4 is devoted to time-domain models of transient gravity perturbations from seismic point sources. The formalism is applied to point forces and shear dislocations. The latter allows us to estimate gravity perturbations from earthquakes. Atmospheric models of gravity perturbations are presented in Section 5. This includes gravity perturbations from atmospheric temperature fields, infrasound fields, shock waves, and acoustic noise from turbulence. The solution for shock waves is calculated in time domain using the methods of Section 4. A theoretical framework to calculate gravity perturbations from objects is given in Section 6. Since many different types of objects can be potential sources of gravity perturbations, the discussion focusses on the development of a general method instead of summarizing all of the calculations that have been done in the past. Finally, Section 7 discusses possible passive and active noise mitigation strategies. Due to the complexity of the problem, most of the section is devoted to active noise cancellation providing the required analysis tools and showing limitations of this technique. Site selection is the main topic under passive mitigation, and is discussed in the context of reducing environmental noise and criteria relevant to active noise cancellation. Each of these sections ends with a summary and a discussion of open problems. While this article is meant to be a review of the current state of the field, it also presents new analyses especially with respect to the impact of seismic scattering on gravity perturbations (Sections 3.3.2 and 3.3.3), active gravity noise cancellation (Section 7.1.3), and time-domain models of gravity perturbations from atmospheric and seismic point sources (Sections 4.1, 4.5, and 5.3).
Even though evident to experts, it is worth emphasizing that all calculations carried out in this article have a common starting point, namely Newton’s universal law of gravitation. It states that the attractive gravitational force F12 between two point masses m1, m2 separated by a distance r12 is given by
F_{12} = G\,\frac{m_1 m_2}{r_{12}^2}    (1)
where G = 6.672 × 10^−11 N m^2/kg^2 is the gravitational constant. Eq. (1) gives rise to many complex phenomena on Earth such as inner-core oscillations [156], atmospheric gravity waves [157], ocean waves [94, 177], and co-seismic gravity changes [122]. Due to its importance, we will honor the eponym by referring to gravity noise as Newtonian noise in the following. It is thereby clarified that the gravity noise models considered in this article are non-relativistic, and propagation effects of gravity changes are neglected. While there could be interesting scenarios where this approximation is not fully justified (e.g., whenever a gravity perturbation can be sensed by several sensors and differences in arrival times can be resolved), it certainly holds in any of the problems discussed in this article. We now invite the reader to enjoy the rest of the article, and hope that it proves to be useful.
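To give a feel for the magnitudes involved, the following short Python sketch (not part of the original text; the 1000 kg mass and 10 m distance are arbitrary assumptions) evaluates the test-mass acceleration implied by Eq. (1) for a small nearby source:

    # Order-of-magnitude feel for Newtonian gravity perturbations (illustrative only).
    G = 6.672e-11           # gravitational constant, N m^2/kg^2 (value used above)
    M = 1000.0              # assumed perturbing mass, kg
    r = 10.0                # assumed distance, m
    delta_g = G * M / r**2  # acceleration of a test mass produced by that object
    print(f"delta_g = {delta_g:.2e} m/s^2")   # ~6.7e-10 m/s^2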
Gravity Measurements
In this section, we describe the relevant mechanisms by which a gravity sensor can couple to gravity perturbations, and give an overview of the most widely used measurement schemes: the (relative) gravimeter [53, 181], the gravity gradiometer [125], and the gravity strainmeter. The last category includes the large-scale GW detectors Virgo [6], LIGO [91], GEO600 [119], KAGRA [17], and a new generation of torsion-bar antennas currently under development [13]. Also atom interferometers can potentially be used as gravity strainmeters in the future [62]. Strictly speaking, none of the sensors only responds to a single field quantity (such as changes in gravity acceleration or gravity strain), but there is always a dominant response mechanism in each case, which justifies to give the sensor a specific name. A clear distinction between gravity gradiometers and gravity strainmeters has never been made to our knowledge. Therefore the sections on these two measurement principles will introduce a definition, and it is by no means the only possible one. Later on in this article, we almost exclusively discuss gravity models relevant to gravity strainmeters since the focus lies on gravity fluctuations above 10 mHz. Today, the sensitivity near 10 mHz of gravimeters towards gravity fluctuations is still competitive to or exceeds the sensitivity of gravity strainmeters, but this is likely going to change in the future so that we can expect strainmeters to become the technology of choice for gravity observations above 10 mHz [88]. The following sections provide further details on this statement. Space-borne gravity experiments such as GRACE [167] will not be included in this overview. The measurement principle of GRACE is similar to that of gravity strainmeters, but only very slow changes of Earth gravity field can be observed, and for this reason it is beyond the scope of this article.
The different response mechanisms to terrestrial gravity perturbations are summarized in Section 2.1. While we will identify the tidal forces acting on the test masses as dominant coupling mechanism, other couplings may well be relevant depending on the experiment. The Shapiro time delay will be discussed as the only relativistic effect. Higher-order relativistic effects are neglected. All other coupling mechanisms can be calculated using Newtonian theory including tidal forces, coupling in static non-uniform gravity fields, and coupling through ground displacement induced by gravity fluctuations. In Sections 2.2 to 2.4, the different measurement schemes are explained including a brief summary of the sensitivity limitations (choosing one of a few possible experimental realizations in each case). As mentioned before, we will mostly develop gravity models relevant to gravity strainmeters in the remainder of the article. Therefore, the detailed discussion of alternative gravimetry concepts mostly serves to highlight important differences between these concepts, and to develop a deeper understanding of the instruments and their role in gravity measurements.
Gravity response mechanisms
Gravity acceleration and tidal forces. We will start with the simplest mechanism of all, the acceleration of a test mass in the gravity field. Instruments that measure the acceleration are called gravimeters. A test mass inside a gravimeter can be freely falling such as atom clouds [181] or, as suggested as a possible future development, even macroscopic objects [72]. Typically though, test masses are supported mechanically or magnetically, constraining motion in some of their degrees of freedom. A test mass suspended from strings responds to changes in the horizontal gravity acceleration. A test mass attached at the end of a cantilever with horizontal equilibrium position responds to changes in vertical gravity acceleration. The support fulfills two purposes. First, it counteracts the static gravitational force in a way that the test mass can respond to changes in the gravity field along a chosen degree of freedom. Second, it isolates the test mass from vibrations. Response to signals and isolation performance depend on frequency. If the support is modelled as a linear, harmonic oscillator, then the test mass response to gravity changes extends over all frequencies, but the response is strongly suppressed below the oscillator's resonance frequency. The response function between the gravity perturbation δg(ω) and induced test mass acceleration δa(ω) assumes the form
\delta a(\omega) = \frac{\omega^2}{\omega^2 - \omega_0^2 + \mathrm{i}\gamma\omega}\,\delta g(\omega)    (2)
where we have introduced a viscous damping parameter γ, and ω0 is the resonance frequency. Well below resonance, the response is proportional to ω^2, while it is constant well above resonance. Above resonance, the supported test mass responds like a freely falling mass, at least with respect to “soft” directions of the support. The test-mass response to vibrations δα(ω) of the support is given by
\delta a(\omega) = -\frac{\omega_0^2 + \mathrm{i}\gamma\omega}{\omega^2 - \omega_0^2 + \mathrm{i}\gamma\omega}\,\delta\alpha(\omega)    (3)
This applies for example to horizontal vibrations of the suspension points of strings that hold a test mass, or to vertical vibrations of the clamps of a horizontal cantilever with attached test mass. Well above resonance, vibrations are suppressed by ω^−2, while no vibration isolation is provided below resonance. The situation is somewhat more complicated in realistic models of the support especially due to internal modes of the mechanical system (see for example [76]), or due to coupling of degrees of freedom [121]. Large mechanical support structures can feature internal resonances at relatively low frequencies, which can interfere to some extent with the desired performance of the mechanical support [173]. While Eqs. (2) and (3) summarize the properties of isolation and response relevant for this paper, details of the readout method can fundamentally impact an instrument’s response to gravity fluctuations and its susceptibility to seismic noise, as explained in Sections 2.2 to 2.4.
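The frequency dependence of these two responses can be illustrated with a short Python sketch. It uses the explicit harmonic-oscillator forms of Eqs. (2) and (3) as reconstructed above, with an assumed 1 Hz resonance and an arbitrary damping value:

    import numpy as np
    # Response of a supported test mass to gravity perturbations (Eq. (2)) and to support
    # vibrations (Eq. (3)), in the reconstructed harmonic-oscillator forms above.
    f0, gamma = 1.0, 0.5               # assumed resonance frequency (Hz) and damping (1/s)
    w0 = 2 * np.pi * f0
    def gravity_response(f):           # |delta_a / delta_g|
        w = 2 * np.pi * f
        return abs(w**2 / (w**2 - w0**2 + 1j * gamma * w))
    def seismic_transmission(f):       # |delta_a / delta_alpha|
        w = 2 * np.pi * f
        return abs((w0**2 + 1j * gamma * w) / (w**2 - w0**2 + 1j * gamma * w))
    for f in (0.01, 0.1, 1.0, 10.0, 100.0):
        print(f"{f:7.2f} Hz  gravity response {gravity_response(f):.2e}  "
              f"seismic transmission {seismic_transmission(f):.2e}")
    # Well below resonance the gravity response falls off as (f/f0)^2 while the seismic
    # transmission approaches 1; well above resonance the roles are reversed.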
Next, we discuss the response to tidal forces. In Newtonian theory, tidal forces cause a relative acceleration δg12(ω) between two freely falling test masses according to
\delta\vec g_{12}(\omega) = \delta\vec g(\vec r_2, \omega) - \delta\vec g(\vec r_1, \omega)
                         \approx -\big(\vec r_{12}\cdot\nabla\big)\nabla\,\delta\psi(\vec r_1, \omega)    (4)
where δψ(r1, ω) is the Fourier amplitude of the gravity potential. The last equation holds if the distance r12 between the test masses is sufficiently small, which also depends on the frequency. The term ∇ ⊗ ∇δψ is called gravity-gradient tensor. In Newtonian approximation, the second time integral of this tensor corresponds to gravity strain h, which is discussed in more detail in Section 2.4. Its trace needs to vanish in empty space since the gravity potential fulfills the Poisson equation. Tidal forces produce the dominant signals in gravity gradiometers and gravity strainmeters, which measure the differential acceleration or associated relative displacement between two test masses (see Sections 2.3 and 2.4). If the test masses used for a tidal measurement are supported, then typically the supports are designed to be as similar as possible, so that the response in Eq. (2) holds for both test masses approximately with the same parameter values for the resonance frequencies (and to a lesser extent also for the damping). For the purpose of response calibration, it is less important to know the parameter values exactly if the signal is meant to be observed well above the resonance frequency where the response is approximately equal to 1 independent of the resonance frequency and damping (here, “well above” resonance also depends on the damping parameter, and in realistic models, the signal frequency also needs to be “well below” internal resonances of the mechanical support).
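The meaning of the gravity-gradient tensor in Eq. (4) can be checked numerically for the simplest possible source, a point mass; the following sketch (my own example, with arbitrary parameter values) compares the exact differential acceleration of two nearby test masses with the first-order tidal approximation:

    import numpy as np
    # Tidal (gravity-gradient) approximation for a point-mass source, cf. Eq. (4).
    G, M = 6.672e-11, 1.0e3            # assumed source mass of 1000 kg at the origin
    def g(r):                          # gravity acceleration vector at position r
        d = np.linalg.norm(r)
        return -G * M * r / d**3
    def gradient_tensor(r):            # analytic gradient tensor, element [i, j] = d g_i / d x_j
        d = np.linalg.norm(r)
        return G * M * (3 * np.outer(r, r) / d**5 - np.eye(3) / d**3)
    r1 = np.array([10.0, 0.0, 0.0])    # first test mass (assumed position)
    r12 = np.array([0.0, 1.0, 0.0])    # small separation to the second test mass (assumed)
    exact = g(r1 + r12) - g(r1)        # exact differential acceleration
    tidal = gradient_tensor(r1) @ r12  # first-order tidal approximation
    print(exact)
    print(tidal)                       # agrees with the exact result to first order in |r12|/|r1|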
Shapiro time delay. Another possible gravity response is through the Shapiro time delay [19]. This effect is not universally present in all gravity sensors, and depends on the readout mechanism. Today, the best sensitivities are achieved by reflecting laser beams from test masses in interferometric configurations. If the test mass is displaced by gravity fluctuations, then it imprints a phase shift onto the reflected laser, which can be observed in laser interferometers, or using phasemeters. We will give further details on this in Section 2.4. In Newtonian gravity, the acceleration of test masses is the only predicted response to gravity fluctuations. However, from general relativity we know that gravity also affects the propagation of light. The leading-order term is the Shapiro time delay, which produces a phase shift of the laser beam with respect to a laser propagating in flat space. It can be calculated from the weak-field spacetime metric (see chapter 18 in [124]):
\mathrm{d}s^2 = -\left(1 + \frac{2\psi}{c^2}\right)c^2\,\mathrm{d}t^2 + \left(1 - \frac{2\psi}{c^2}\right)\left(\mathrm{d}x^2 + \mathrm{d}y^2 + \mathrm{d}z^2\right)    (5)
Here, c is the speed of light, ds is the so-called line element of a path in spacetime, and ψ(r, t) is the (weak) gravity potential with |ψ| ≪ c^2. Additionally, for this metric to hold, motion of particles in the source responsible for changes of the gravity potential needs to be much slower than the speed of light, and also stresses inside the source must be much smaller than its mass energy density. All conditions are fulfilled in the case of Earth's gravity field. Light follows null geodesics with ds^2 = 0. For the spacetime metric in Eq. (5), we can immediately write
c\,\mathrm{d}t = \left(1 - \frac{2\psi}{c^2}\right)|\mathrm{d}\vec r\,|    (6)
As we will find out, this equation can directly be used to calculate the time delay as an integral along a straight line in terms of the coordinates r(t), but this is not immediately clear since light bends in a gravity field. So one may wonder if integration along the proper light path instead of a straight line yields additional significant corrections. The so-called geodesic equation must be used to calculate the path. It is a set of four differential equations, one for each of the coordinates t and r, in terms of a parameter λ. The weak-field geodesic equation is obtained from the metric in Eq. (5):
\frac{\mathrm{d}^2 t}{\mathrm{d}\lambda^2} = -\frac{2}{c^2}\,\frac{\mathrm{d}t}{\mathrm{d}\lambda}\left(\frac{\mathrm{d}\vec r}{\mathrm{d}\lambda}\cdot\nabla\psi\right),\qquad
\frac{\mathrm{d}^2 \vec r}{\mathrm{d}\lambda^2} = -2\left(\frac{\mathrm{d}t}{\mathrm{d}\lambda}\right)^{2}\nabla\psi + \frac{2}{c^2}\left(\frac{\mathrm{d}\vec r}{\mathrm{d}\lambda}\cdot\nabla\psi\right)\frac{\mathrm{d}\vec r}{\mathrm{d}\lambda}    (7)
where we have made use of Eq. (6) and the slow-motion condition stated below Eq. (5). The coordinates t and r are to be understood as functions of λ. Since the deviation of a straight path is due to a weak gravity potential, we can solve these equations by perturbation theory introducing the expansions r = r^(0) + r^(1) + … and t = t^(0) + t^(1) + …. The superscript indicates the order in ψ/c^2. The unperturbed path has the simple parametrization
t^{(0)}(\lambda) = t_0 + \lambda,\qquad \vec r^{\,(0)}(\lambda) = \vec r_0 + c\lambda\,\vec e_0    (8)
We have chosen integration constants such that unperturbed time t^(0) and parameter λ can be used interchangeably (apart from a shift by t0). Inserting these expressions into the right-hand side of Eq. (7), we obtain
\frac{\mathrm{d}^2 \vec r^{\,(1)}(\lambda)}{\mathrm{d}\lambda^2} = -2\Big(\nabla - \vec e_0\,(\vec e_0\cdot\nabla)\Big)\,\psi\big(\vec r^{\,(0)}(\lambda)\big)    (9)
As we can see, up to linear order in ψ/c^2, the deviation r^(1) is orthogonal to the direction e0 of the unperturbed path, which means that the deviation can be neglected in the calculation of the time delay. After some transformations, it is possible to derive Eq. (6) from Eq. (9), and this time we find explicitly that the right-hand side of the equation only depends on the unperturbed coordinates. In other words, we can integrate the time delay along a straight line as defined in Eq. (8), and so the total phase integrated over a travel distance L is given by
\Delta\phi = \frac{\omega_0}{c}\int_0^L \mathrm{d}s\,\left(1 - \frac{2}{c^2}\,\psi(\vec r_0 + s\,\vec e_0)\right)    (10)
In static gravity fields, the phase shift doubles if the light is sent back since not only the direction of integration changes, but also the sign of the expression substituted for dt/dλ.
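For orientation, the size of this effect in a laboratory setting can be estimated with a few lines of Python; the 1064 nm laser wavelength, the 1000 kg mass placed 1 m from the beam, and the 4 km path length below are assumptions of mine, not numbers from the text, and Eq. (10) is used in the reconstructed form given above:

    import numpy as np
    # Order-of-magnitude Shapiro phase shift from a small mass next to a laser beam.
    G, c = 6.672e-11, 2.998e8
    M, d = 1000.0, 1.0                 # assumed perturbing mass (kg) and distance from the beam (m)
    L = 4000.0                         # assumed beam path length (m)
    lam = 1064e-9                      # assumed laser wavelength (m)
    w0 = 2 * np.pi * c / lam           # laser angular frequency
    s = np.linspace(-L / 2, L / 2, 200001)    # coordinate along the beam, mass closest at s = 0
    psi = -G * M / np.sqrt(d**2 + s**2)       # gravity potential along the straight path
    ds = s[1] - s[0]
    dphi = -(w0 / c) * (2.0 / c**2) * np.sum(psi) * ds   # potential-dependent part of Eq. (10)
    print(f"Shapiro phase perturbation ~ {dphi:.1e} rad")
    # Of order 1e-16 rad with these assumptions, far smaller than the phase produced by the
    # corresponding displacement of the test mass itself.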
Gravity-induced ground motion. As we will learn in Section 3, seismic fields produce gravity perturbations either through density fluctuations of the ground, or by displacing interfaces between two materials of different density. It is also well-known in seismology that seismic fields can be affected significantly by self-gravity. Self-gravity means that the gravity perturbation produced by a seismic field acts back on the seismic field. The effect is most significant at low frequency where gravity-induced acceleration competes against acceleration from elastic forces. In seismology, low-frequency seismic fields are best described in terms of Earth’s normal modes [55]. Normal modes exist as toroidal modes and spheroidal modes. Spheroidal modes are influenced by self-gravity, toroidal modes are not. For example, predictions of frequencies and shapes of spheroidal modes based on Earth models such as PREM (Preliminary Reference Earth Model) [68] are inaccurate if self-gravity effects are excluded. What this practically means is that in addition to displacement amplitudes, gravity becomes a dynamical variable in the elastodynamic equations that determine the normal-mode properties. Therefore, seismic displacement and gravity perturbation cannot be separated in normal-mode formalism (although self-gravity can be neglected in calculations of spheroidal modes at sufficiently high frequency).
In certain situations, it is necessary or at least more intuitive to separate gravity from seismic fields. An exotic example is Earth’s response to GWs [67, 49, 47, 30, 48]. Another example is the seismic response to gravity perturbations produced by strong seismic events at large distance to the source as described in Section 4. It is more challenging to analyze this scenario using normal-mode formalism. The sum over all normal modes excited by the seismic event (each of which describing a global displacement field) must lead to destructive interference of seismic displacement at large distances (where seismic waves have not yet arrived), but not of the gravity amplitudes since gravity is immediately perturbed everywhere. It can be easier to first calculate the gravity perturbation from the seismic perturbation, and then to calculate the response of the seismic field to the gravity perturbation at larger distance. This method will be adopted in this section. Gravity fields will be represented as arbitrary force or tidal fields (detailed models are presented in later sections), and we simply calculate the response of the seismic field. Normal-mode formalism can be avoided only at sufficiently high frequencies where the curvature of Earth does not significantly influence the response (i.e., well above 10 mHz). In this section, we will model the ground as homogeneous half space, but also more complex geologies can in principle be assumed.
Gravity can be introduced in two ways into the elastodynamic equations, as a conservative force −∇ψ [146, 169], or as a tidal strain. The latter method was described first by Dyson to calculate Earth’s response to GWs [67]. The approach also works for Newtonian gravity, with the difference that the tidal field produced by a GW is necessarily a quadrupole field with only two degrees of freedom (polarizations), while tidal fields produced by terrestrial sources are less constrained. Certainly, GWs can only be fully described in the framework of general relativity, which means that their representation as a Newtonian tidal field cannot be used to explain all possible observations [124]. Nonetheless, important here is that Dyson’s method can be extended to Newtonian tidal fields. Without gravity, the elastodynamic equations for small seismic displacement can be written as
\rho\,\ddot{\vec\xi}(\vec r, t) = \nabla\cdot\boldsymbol\sigma(\vec r, t)    (11)
where ξ(r, t) is the seismic displacement field, σ(r, t) is the stress tensor [9], and ρ is the density of the medium. In the absence of other forces, the stress is determined by the seismic field. In the case of a homogeneous and isotropic medium, the stress tensor for small seismic displacement can be written as
\boldsymbol\sigma(\vec r, t) = \lambda\,\mathrm{Tr}\big(\boldsymbol\epsilon(\vec r, t)\big)\,\mathbf{1} + 2\mu\,\boldsymbol\epsilon(\vec r, t)    (12)
The quantity ε = (∇ ⊗ ξ + (∇ ⊗ ξ)^T)/2 is known as the seismic strain tensor, and λ, μ are the Lamé constants (see Section 3.1). Its trace is equal to the divergence of the displacement field. Dyson introduced the tidal field from first principles using Lagrangian mechanics, but we can follow a simpler approach. Eq. (12) means that a stress field builds up in response to a seismic strain field, and the divergence of the stress field acts as a force producing seismic displacement. The same happens in response to a tidal field, which we represent as gravity strain h(r, t). A strain field changes the separation L of two freely falling test masses by h · L. For sufficiently small distances L, the strain field can be substituted by the second time integral of the gravity-gradient tensor −∇ ⊗ ∇ψ. If the masses are not freely falling, then the strain field acts as an additional force. The corresponding contribution to the material’s stress tensor can be written
\boldsymbol\sigma_{\mathrm{grav}}(\vec r, t) = \lambda\,\mathrm{Tr}\big(\mathbf h(\vec r, t)\big)\,\mathbf{1} + 2\mu\,\mathbf h(\vec r, t)    (13)
Since we assume that the gravity field is produced by a distant source, the local contribution to gravity perturbations is neglected, which means that the gravity potential obeys the Laplace equation, Δψ = 0. Calculating the divergence of the stress tensor according to Eq. (11), we find that the gravity term vanishes! This means that a homogeneous and isotropic medium does not respond to gravity strain fields. However, we have to be more careful here. Our goal is to calculate the response of a half-space to gravity strain. Even if the half-space is homogeneous, the Lamé constants change discontinuously across the surface. Hence, at the surface, the divergence of the stress tensor reads
\nabla\cdot\boldsymbol\sigma_{\mathrm{grav}}(\vec r, t) = 2\,\big(\nabla\mu(\vec r\,)\big)\cdot\mathbf h(\vec r, t)    (14)
In other words, tidal fields produce a force onto an elastic medium via gradients in the shear modulus (the second Lamé constant). The gradient of the shear modulus can be written in terms of a Dirac delta function, ∇μ(r) = −μ δ(z) e_z, for a flat surface at z = 0 with unit normal vector e_z. The response to gravity strain fields is obtained applying the boundary condition of vanishing surface traction, σ(r, t) · e_z = 0 at z = 0:
\big(\lambda\,\mathrm{Tr}(\boldsymbol\epsilon)\,\mathbf{1} + 2\mu\,\boldsymbol\epsilon\big)\cdot\vec e_z = -2\mu\,\mathbf h\cdot\vec e_z \quad\text{at } z = 0    (15)
Once the seismic strain field is calculated, it can be used to obtain the seismic stress, which determines the displacement field ξ(r, t) according to Eq. (11). In this way, one can for example calculate that a seismometer or gravimeter can observe GWs by monitoring surface displacement, as was first calculated by Dyson [67].
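The key step above, namely that the bulk force from the gravity-strain stress vanishes when the potential satisfies the Laplace equation and the elastic constants are uniform, is easy to verify symbolically. The following sympy sketch (my own check, using one arbitrary harmonic potential) does so:

    import sympy as sp
    # Check: for a harmonic potential and constant Lame constants, the divergence of the
    # gravity-strain stress contribution vanishes in the bulk, leaving only the surface term.
    x, y, z, k = sp.symbols('x y z k', real=True, positive=True)
    coords = (x, y, z)
    psi = sp.exp(k * z) * sp.cos(k * x)                      # an arbitrary harmonic potential
    assert sp.simplify(sum(sp.diff(psi, c, 2) for c in coords)) == 0   # Laplace equation holds
    # spatial structure of the gravity-strain tensor h ~ (second time integral of) -d_i d_j psi
    h = sp.Matrix(3, 3, lambda i, j: -sp.diff(psi, coords[i], coords[j]))
    div_h = [sp.simplify(sum(sp.diff(h[i, j], coords[j]) for j in range(3))) for i in range(3)]
    print(div_h)                   # [0, 0, 0]: no bulk force from the gravity term
    print(sp.simplify(h.trace()))  # 0: the trace term drops as well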
Coupling in non-uniform, static gravity fields. If the gravity field is static, but non-uniform, then a displacement ξ(t) of the test mass in this field due to a non-gravitational fluctuating force is associated with a changing gravity acceleration according to
\delta\vec a(t) = \big(\vec\xi(t)\cdot\nabla\big)\,\vec g(\vec r\,)    (16)
We introduce a characteristic length λ, over which gravity acceleration varies significantly. Hence, we can rewrite the last equation in terms of the associated test-mass displacement ζ
\delta a(\omega) \approx \frac{g}{\lambda}\,\zeta(\omega)    (17)
where we have neglected directional dependence and numerical factors. The acceleration change from motion in static, inhomogeneous fields is generally more significant at low frequencies. Let us consider the specific case of a suspended test mass. It responds to fluctuations in horizontal gravity acceleration. The test mass follows the motion of the suspension point in vertical direction (i.e., no seismic isolation), while seismic noise in horizontal direction is suppressed according to Eq. (3). Accordingly, it is possible that the unsuppressed vertical (z-axis) seismic noise ξz(t) coupling into the horizontal (x-axis) motion of the test mass through the term ∂xgz = ∂zgx dominates over the gravity response term in Eq. (2). Due to additional coupling mechanisms between vertical and horizontal motion in real seismic-isolation systems, test masses especially in GW detectors are also isolated in vertical direction, but without achieving the same noise suppression as in horizontal direction. For example, the requirements on vertical test-mass displacement for Advanced LIGO are a factor 1000 less stringent than on the horizontal displacement [22]. Requirements can be set on the vertical isolation by estimating the coupling of vertical motion into horizontal motion, which needs to take the gravity-gradient coupling of Eq. (16) into account. Because of the frequency dependence, though, gravity-gradient effects are more significant in low-frequency detectors, such as the space-borne GW detector LISA [154].
Next, we calculate an estimate of gravity gradients in the vicinity of test masses in large-scale GW detectors, and see if the gravity-gradient coupling matters compared to mechanical vertical-to-horizontal coupling.
One contribution to gravity gradients will come from the vacuum chamber surrounding the test mass. We approximate the shape of the chamber as a hollow cylinder with open ends (open ends just to simplify the calculation). In our calculation, the test mass can be offset from the cylinder axis and be located at any distance to the cylinder ends (we refer to this coordinate as height). The gravity field can be expressed in terms of elliptic integrals, but the explicit solution is not of concern here. Instead, let us take a look at the results in Figure 1. Gravity gradients ∂zgx vanish if the test mass is located on the symmetry axis or at height L/2. There are also two additional ∂zgx = 0 contour lines starting at the symmetry axis at heights of about 0.24L and 0.76L. Let us assume that the test mass is at height 0.3L, a distance 0.05L from the cylinder axis, the total mass of the cylinder is M = 5000 kg, and the cylinder height is L = 4 m. In this case, the gravity-gradient induced vertical-to-horizontal coupling factor at 20 Hz is
\left.\frac{\partial_z g_x}{\omega^2}\right|_{\omega = 2\pi\cdot 20\,\mathrm{Hz}} \ll 10^{-3}    (18)
This means that gravity-gradient induced coupling is extremely weak, and lies well below estimates of mechanical coupling (of order 0.001 in Advanced LIGO). Even though the vacuum chamber was modelled with a very simple shape, and additional asymmetries in the mass distribution around the test mass may increase gravity gradients, it still seems very unlikely that the coupling would be significant. As mentioned before, one certainly needs to pay more attention when calculating the coupling at lower frequencies. The best procedure is of course to have a 3D model of the near test-mass infrastructure available and to use it for a precise calculation of the gravity-gradient field.
Figure 1
Gravity gradients inside a hollow cylinder. The total height of the cylinder is L, and M is its total mass. The radius of the cylinder is 0.3L. The axes correspond to the distance of the test mass from the symmetry axis of the cylinder, and its height above one of the cylinder's ends. The plot on the right is simply a zoom of the left plot into the intermediate heights.
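As a rough cross-check of the scale of this estimate, the following Python sketch numerically integrates the field of a thin cylindrical shell (an assumption on my part; the text only specifies a hollow cylinder with open ends) with M = 5000 kg, L = 4 m, radius 0.3L, the test mass at height 0.3L and 0.05L off axis, and divides the resulting gradient by ω^2 at 20 Hz:

    import numpy as np
    # Numerical estimate of the vertical-to-horizontal gravity-gradient coupling factor
    # for a thin cylindrical shell with uniform surface mass density (illustrative model).
    G = 6.672e-11
    M, L = 5000.0, 4.0
    R = 0.3 * L
    x0, z0 = 0.05 * L, 0.3 * L                    # test-mass offset from axis and height
    nphi, nz = 400, 400
    phi = (np.arange(nphi) + 0.5) * 2 * np.pi / nphi
    zs = (np.arange(nz) + 0.5) * L / nz
    PHI, ZS = np.meshgrid(phi, zs)
    dm = M / (nphi * nz)                          # mass of each surface patch
    def gx(z_test):                               # horizontal gravity acceleration at the test mass
        dx = R * np.cos(PHI) - x0
        dy = R * np.sin(PHI)
        dz = ZS - z_test
        r3 = (dx**2 + dy**2 + dz**2) ** 1.5
        return G * dm * np.sum(dx / r3)
    eps = 1e-3
    dzgx = (gx(z0 + eps) - gx(z0 - eps)) / (2 * eps)   # finite-difference gradient d g_x / d z
    omega = 2 * np.pi * 20.0
    print(f"dz gx ~ {dzgx:.1e} s^-2, coupling dz gx / omega^2 ~ {dzgx / omega**2:.1e}")
    # The coupling factor comes out many orders of magnitude below the mechanical
    # coupling of order 0.001 quoted in the text.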
Gravimeters
Gravimeters are instruments that measure the displacement of a test mass with respect to a non-inertial reference rigidly connected to the ground. The test mass is typically supported mechanically or magnetically (atom-interferometric gravimeters are an exception), which means that the test-mass response to gravity is altered with respect to a freely falling test mass. We will use Eq. (2) as a simplified response model. There are various possibilities to measure the displacement of a test mass. The most widespread displacement sensors are based on capacitive readout, as for example used in superconducting gravimeters (see Figure 2 and [96]). Sensitive displacement measurements are in principle also possible with optical readout systems; a method that is (necessarily) implemented in atom-interferometric gravimeters [137], and prototype seismometers [34] (we will explain the distinction between seismometers and gravimeters below). As will become clear in Section 2.4, optical readout is better suited for displacement measurements over long baselines, as required for the most sensitive gravity strain measurements, while the capacitive readout should be designed with the smallest possible distance between the test mass and the non-inertial reference [104].
Figure 2
Sketch of a levitated sphere serving as test mass in a superconducting gravimeter. Dashed lines indicate magnetic field lines. Coils are used for levitation and precise positioning of the sphere. Image reproduced with permission from [96]; copyright by Elsevier.
Let us take a closer look at the basic measurement scheme of a superconducting gravimeter shown in Figure 2. The central part is formed by a spherical superconducting shell that is levitated by superconducting coils. Superconductivity provides stability of the measurement, and also avoids some forms of noise (see [96] for details). In this gravimeter design, the lower coil is mostly responsible for balancing the mean gravitational force acting on the sphere, while the upper coil modifies the magnetic gradient such that a certain “spring constant” of the magnetic levitation is realized. In other words, the current in the upper coil determines the resonance frequency in Eq. (2).
Capacitor plates are distributed around the sphere. Whenever a force acts on the sphere, the small signal produced in the capacitive readout is used to immediately cancel this force by a feedback coil. In this way, the sphere is kept at a constant location with respect to the external frame. This illustrates a common concept in all gravimeters. The displacement sensors can only respond to relative displacement between a test mass and a surrounding structure. If small gravity fluctuations are to be measured, then it is not sufficient to realize low-noise readout systems, but also vibrations of the surrounding structure forming the reference frame must be as small as possible. In general, as we will further explore in the coming sections, gravity fluctuations are increasingly dominant with decreasing frequency. At about 1 mHz, gravity acceleration associated with fluctuating seismic fields becomes comparable to seismic acceleration, and also atmospheric gravity noise starts to be significant [53]. At higher frequencies, seismic acceleration is much stronger than typical gravity fluctuations, which means that the gravimeter effectively operates as a seismometer. In summary, at sufficiently low frequencies, the gravimeter senses gravity accelerations of the test mass with respect to a relatively quiet reference, while at higher frequencies, the gravimeter senses seismic accelerations of the reference with respect to a test mass subject to relatively small gravity fluctuations. In superconducting gravimeters, the third important contribution to the response is caused by vertical motion ξ(t) of a levitated sphere against a static gravity gradient (see Section 2.1.4). As explained above, feedback control suppresses relative motion between sphere and gravimeter frame, which causes the sphere to move as if attached to the frame or ground. In the presence of a static gravity gradient ∂zgz, the motion of the sphere against this gradient leads to a change in gravity, which alters the feedback force (and therefore the recorded signal). The full contribution from gravitational, δa(t), and seismic, ξ̈(t), accelerations can therefore be written
\delta a_{\mathrm{total}}(t) = \delta a(t) - \ddot\xi(t) + \xi(t)\,\partial_z g_z    (19)
It is easy to verify, using Eqs. (2) and (3), that the relative amplitude of gravity and seismic fluctuations from the first two terms is independent of the test-mass support. Therefore, vertical seismic displacement of the reference frame must be considered fundamental noise of gravimeters and can only be avoided by choosing a quiet measurement site. Obviously, Eq. (19) is based on a simplified support model. One of the important design goals of the mechanical support is to minimize additional noise due to non-linearities and cross-coupling. As is explained further in Section 2.3, it is also not possible to suppress seismic noise in gravimeters by subtracting the disturbance using data from a collocated seismometer. Doing so inevitably turns the gravimeter into a gravity gradiometer.
Gravimeters target signals that typically lie well below 1 mHz. Mechanical or magnetic supports of test masses have resonance frequencies at best slightly below 10 mHz along horizontal directions, and typically above 0.1 Hz in the vertical direction [23, 174]. Well below the resonance frequency, the response function can be approximated as −ω^2/ω0^2. At first, it may look as if the gravimeter should not be sensitive to very low-frequency fluctuations since the response becomes very weak. However, the strength of gravity fluctuations also strongly increases with decreasing frequency, which compensates the small response. It is clear though that if the resonance frequency was sufficiently high, then the response would become so weak that the gravity signal would not stand out above other instrumental noise anymore. The test-mass support would be too stiff. The sensitivity of the gravimeter depends on the resonance frequency of the support and the intrinsic instrumental noise. With respect to seismic noise, the stiffness of the support has no influence as explained before (the test mass can also fall freely as in atom interferometers).
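As a quick numerical illustration of this trade-off (my own arithmetic, with assumed example frequencies), the suppression factor for a 0.1 mHz signal measured with a 10 mHz support resonance is:

    # Response suppression well below resonance: |delta_a / delta_g| ~ (f / f0)^2
    f_signal = 1e-4   # 0.1 mHz signal frequency (assumed example)
    f0 = 1e-2         # 10 mHz support resonance (assumed example)
    print((f_signal / f0) ** 2)   # 1e-4; the weak response is compensated by the much
                                  # larger gravity fluctuations at low frequencies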
For superconducting gravimeters of the Global Geodynamics Project (GGP) [52], the median spectra are shown in Figure 3. Between 0.1 mHz and 1 mHz, atmospheric gravity perturbations typically dominate, while instrumental noise is the largest contribution between 1 mHz and 5 mHz [96]. The smallest signal amplitudes that have been measured by integrating long-duration signals are about 10^−12 m/s^2. A detailed study of noise in superconducting gravimeters over a larger frequency range can be found in [145]. Note that in some cases, it is not appropriate to categorize seismic and gravity fluctuations as noise and signal. For example, Earth’s spherical normal modes coherently excite seismic and gravity fluctuations, and the individual contributions in Eq. (19) have to be understood only to accurately translate data into normal-mode amplitudes [55].
Figure 3
Median spectra of superconducting gravimeters of the GGP. Image reproduced with permission from [48]; copyright by APS.
Gravity gradiometers
It is not the purpose of this section to give a complete overview of the different gradiometer designs. Gradiometers find many practical applications, for example in navigation and resource exploration, often with the goal to measure static or slowly changing gravity gradients, which do not concern us here. For example, we will not discuss rotating gradiometers, and instead focus on gradiometers consisting of stationary test masses. While the former are ideally suited to measure static or slowly changing gravity gradients with high precision especially under noisy conditions, the latter design has advantages when measuring weak tidal fluctuations. In the following, we only refer to the stationary design. A gravity gradiometer measures the relative acceleration between two test masses each responding to fluctuations of the gravity field [102, 125]. The test masses have to be located close to each other so that the approximation in Eq. (4) holds. The proximity of the test masses is used here as the defining property of gradiometers. They are therefore a special type of gravity strainmeter (see Section 2.4), which denotes any type of instrument that measures relative gravitational acceleration (including the even more general concept of measuring space-time strain).
Gravity gradiometers can be realized in two versions. First, one can read out the position of two test masses with respect to the same rigid, non-inertial reference. The two channels, each of which can be considered a gravimeter, are subsequently subtracted. This scheme is for example realized in dual-sphere designs of superconducting gravity gradiometers [90] or in atom-interferometric gravity gradiometers [159].
It is schematically shown in Figure 4. Let us first consider the dual-sphere design of a superconducting gradiometer. If the reference is perfectly stiff, and if we assume as before that there are no cross-couplings between degrees of freedom and that the response is linear, then the subtraction of the two gravity channels cancels all of the seismic noise, leaving only the instrumental noise and the differential gravity signal given by the second line of Eq. (4). Even in real setups, the reduction of seismic noise can be many orders of magnitude, since the two spheres are close to each other and the two readouts pick up (almost) the same seismic noise [125]. This does not mean, though, that gradiometers are necessarily more sensitive instruments for monitoring gravity fields. A large part of the gravity signal (the common-mode part) is subtracted together with the seismic noise, and the challenge is now passed from finding a seismically quiet site to developing an instrument with the lowest possible intrinsic noise.
Figure 4. Basic scheme of a gravity gradiometer for measurements along the vertical direction. Two test masses are supported by horizontal cantilevers (superconducting magnets, …). Acceleration of both test masses is measured against the same non-inertial reference frame, which is connected to the ground. Each measurement constitutes one gravimeter. Subtraction of the two channels yields a gravity gradiometer.
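The common-mode rejection described above can be illustrated with a toy time-series simulation; all noise levels and the residual-coupling factor below are assumed for illustration only. Two gravimeter channels see nearly the same seismic noise plus slightly different gravity signals, and their subtraction suppresses the seismic contribution while retaining the differential gravity signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
seismic = rng.normal(0.0, 1.0, n)                  # common seismic acceleration (arbitrary units)
grav_1 = 1e-3 * rng.normal(0.0, 1.0, n)            # gravity fluctuation at sphere 1 (assumed level)
grav_2 = grav_1 + 1e-4 * rng.normal(0.0, 1.0, n)   # sphere 2: common part plus a small differential

# Each channel acts as a gravimeter referenced to the same (almost rigid) frame.
# 'mismatch' models imperfect stiffness / cross-coupling: channel 2 sees 0.1% more seismic noise.
mismatch = 1e-3
channel_1 = seismic + grav_1
channel_2 = (1 + mismatch) * seismic + grav_2

gradiometer = channel_2 - channel_1                # differential readout
print("rms single channel :", np.std(channel_1))
print("rms differential   :", np.std(gradiometer))
# The seismic contribution is suppressed by roughly the mismatch factor (~1e-3 here),
# while the differential gravity signal (and residual seismic leakage) remains.
```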
The atom-interferometric gradiometer differs in some important details from the superconducting gradiometer. The test masses are realized by ultracold atom clouds, which are (nearly) freely falling provided that magnetic shielding of the atoms is sufficient and interactions between atoms can be neglected. Interactions of a pair of atom clouds with a laser beam constitute the basic gravity-gradiometer scheme. Even though the test masses are freely falling, the readout is not generally immune to seismic noise [80, 18]. The laser beam interacting with the atom clouds originates from a source subject to seismic disturbances, and it interacts with optics that require seismic isolation. Schemes have been proposed that could lead to a large reduction of seismic noise [178, 77], but their effectiveness has not yet been tested in experiments. Since the differential position (or tidal) measurement is performed using a laser beam, the natural application of atom-interferometer technology is as a gravity strainmeter (as explained before, laser beams are favorable for differential position measurements over long baselines). Nonetheless, the technology is currently insufficiently developed to realize large-baseline experiments, and we therefore focus on its application in gradiometry. Let us take a closer look at the response of atom-interferometric gradiometers to seismic noise. In atom-interferometric detectors (excluding the new schemes proposed in [178, 77]), one can show that seismic acceleration δα(ω) of the optics or of the laser source limits the sensitivity of a tidal measurement according to
δa(ω) ≈ (ωL/c) δα(ω),   (20)
where δa(ω) is the resulting noise in the differential (tidal) acceleration measurement, L is the separation of the two atom clouds, and c is the speed of light. It should be emphasized that the seismic noise remains even if the optics and the laser source are all linked to the same infinitely stiff frame. In addition to this noise term, other coupling mechanisms may play a role, which can, however, be suppressed by engineering efforts. The noise-reduction factor ωL/c needs to be compared with the common-mode suppression of seismic noise in superconducting gravity gradiometers, which depends on the stiffness of the instrument frame and on contamination from cross-coupling of degrees of freedom. While the seismic noise in Eq. (20) is a fundamental noise contribution in (conventional) atom-interferometric gradiometers, the noise suppression in superconducting gradiometers depends more strongly on the engineering effort (at least, we venture to claim that the common-mode suppression achieved in current instrument designs is well below what is fundamentally possible).
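As a quick numerical check of the noise-reduction factor ωL/c in Eq. (20), the suppression is strongest at low frequencies and short baselines. The cloud separations and frequencies below are assumed, illustrative values.

```python
import numpy as np

c = 299_792_458.0                 # speed of light [m/s]
for L in (1.0, 10.0, 100.0):      # assumed atom-cloud separations [m]
    for f in (0.1, 1.0, 10.0):    # signal frequencies [Hz]
        factor = 2 * np.pi * f * L / c
        print(f"L = {L:6.1f} m, f = {f:5.1f} Hz : omega*L/c = {factor:.2e}")
# Even for a 100 m baseline at 10 Hz the seismic acceleration of the optics couples
# into the tidal measurement suppressed by about 2e-5 (per Eq. (20)).
```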
To conclude this section, we discuss in more detail the connection between gravity gradiometers and seismically (actively or passively) isolated gravimeters. As we have explained in Section 2.2, the sensitivity limitation of gravimeters by seismic noise is independent of the mechanical support of the test mass (assuming an ideal, linear support). The main purpose of the mechanical support is to maximize the response of the test mass to gravity fluctuations, and thereby to increase the signal with respect to instrumental noise other than seismic noise. Here we will explain that even seismic isolation of the gravimeter cannot overcome this noise limitation, at least not without fundamentally changing its response to gravity fluctuations. Let us first consider the case of a passively seismically isolated gravimeter. For example, we can imagine that the gravimeter is suspended from the tip of a strong horizontal cantilever. The system can be modelled as two oscillators in a chain: a light test mass m supported by a heavy mass M representing the gravimeter (reference) frame, which is itself supported from a point rigidly connected to Earth. The two supports are modelled as harmonic oscillators. As before, we neglect cross-coupling between degrees of freedom. Linearizing the response of the gravimeter frame and test mass for small accelerations, and further neglecting terms proportional to m/M, one finds the gravimeter response to gravity fluctuations:
δx(ω) = R(ω; ω2, γ2)(δg2(ω) − δg1(ω)) + S(ω; ω1, γ1)R(ω; ω2, γ2)δg1(ω).   (21)
Here, ω1, γ1 are the resonance frequency and damping of the gravimeter support, while ω2, γ2 are the resonance frequency and damping of the test-mass support. The response and isolation functions R(·), S(·) are defined in Eqs. (2) and (3). Remember that Eq. (21) is obtained as a differential measurement of test-mass acceleration versus acceleration of the reference frame. Therefore, δg1(ω) denotes the gravity fluctuation at the center of mass of the gravimeter frame, and δg2(ω) that at the test mass. An infinitely stiff gravimeter suspension, ω1 → ∞, yields S(ω; ω1, γ1) = 1 (equivalently, R(ω; ω1, γ1) = 0), and the response turns into the form of the non-isolated gravimeter. The seismic isolation is determined by
δx(ω) = −S(ω; ω1, γ1)R(ω; ω2, γ2)δα(ω),   (22)

where δα(ω) denotes the seismic acceleration of the ground.
We can summarize the last two equations as follows. At frequencies well above ω1, the seismically isolated gravimeter responds like a gravity gradiometer, and seismic noise is strongly suppressed. The deviation from the pure gradiometer response ∼ δg2(ω) − δg1(ω) is determined by the same function S(ω; ω1, γ1) that describes the seismic isolation. In other words, if the gravity gradient were negligible, then we would end up with the conventional gravimeter response, with signals suppressed by the seismic isolation function. Well below ω1, the seismically isolated gravimeter responds like a conventional gravimeter without seismic-noise reduction. If the centers of the masses m (test mass) and M (reference frame) coincide, and therefore δg1(ω) = δg2(ω), then the response is again that of a conventional gravimeter, but this time suppressed by the isolation function S(ω; ω1, γ1).
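The two regimes can be checked numerically with the reconstructed Eqs. (21) and (22). The functional forms of R and S used below are one common convention for the response and transmissibility of a damped harmonic oscillator (an assumption here, since Eqs. (2) and (3) are defined earlier in the article), and all resonance frequencies and damping values are illustrative.

```python
import numpy as np

def R(omega, omega0, gamma):
    """Oscillator response to an acceleration input (assumed convention)."""
    return 1.0 / (omega0**2 - omega**2 + 1j * gamma * omega0 * omega)

def S(omega, omega0, gamma):
    """Transmissibility of support motion (assumed convention): ~1 below, small above resonance."""
    return (omega0**2 + 1j * gamma * omega0 * omega) / (omega0**2 - omega**2 + 1j * gamma * omega0 * omega)

w1, g1 = 2 * np.pi * 0.03, 0.1    # gravimeter-frame support (assumed: 30 mHz)
w2, g2 = 2 * np.pi * 0.30, 0.1    # test-mass support (assumed: 0.3 Hz)

for f in (1e-3, 1.0):             # well below and well above the frame resonance
    w = 2 * np.pi * f
    diff = R(w, w2, g2)                      # coefficient of (delta_g2 - delta_g1) in Eq. (21)
    common = S(w, w1, g1) * R(w, w2, g2)     # coefficient of delta_g1 beyond the gradiometer term
    print(f"f = {f:6.3f} Hz : |gradiometer term| = {abs(diff):.2e}, |common-mode term| = {abs(common):.2e}")
# Well below w1 the common-mode term is comparable to the gradiometer term (conventional gravimeter);
# well above w1 it is strongly suppressed and only the differential response survives.
```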
Let us compare the passively isolated gravimeter with an actively isolated gravimeter. In active isolation, the idea is to place the gravimeter on a stiff platform whose orientation can be controlled by actuators. Without actuation, the platform simply follows local surface motion. There are two ways to realize an active isolation. One way is to place a seismometer next to the platform on the ground, and to use its data to subtract ground motion from the platform; the actuators cancel the seismic forces. This scheme is called feed-forward noise cancellation. Feed-forward cancellation of gravity noise is discussed at length in Section 7.1, which provides details on its implementation and limitations. The second possibility is to place the seismometer together with the gravimeter onto the platform, and to suppress seismic noise in a feedback configuration [4, 2]. In the following, we discuss the feed-forward technique as an example, since it is easier to analyze (for example, feedback control can be unstable [4]). As before, we focus on gravity and seismic fluctuations. The seismometer’s intrinsic noise plays an important role in active isolation, limiting its performance, but we are only interested in the modification of the gravimeter’s response. Since there is no fundamental difference in how a seismometer and a gravimeter respond to seismic and gravity fluctuations, we know from Section 2.2 that the seismometer output is proportional to δg1(ω) − δα(ω), i.e., using a single test mass for acceleration measurements, seismic and gravity perturbations contribute in the same way. A transfer function needs to be applied to the acceleration signals, which accounts for the mechanical support and possibly also for electronic circuits involved in the seismometer readout. To cancel the seismic noise of the platform that carries the gravimeter, the effect of all transfer functions needs to be reversed by a matched feed-forward filter. The output of the filter is then equal to δg1(ω) − δα(ω) and is added to the motion of the platform using actuators, cancelling the seismic noise and adding the seismometer’s gravity signal. In this case, the seismometer’s gravity signal takes the place of the seismic noise in Eq. (3). The complete gravity response of the actively isolated gravimeter then reads
δx(ω) = R(ω; ω2, γ2)(δg2(ω) − δg1(ω)).   (23)
The response is identical to that of a gravity gradiometer, where ω2, γ2 are the resonance frequency and damping of the gravimeter’s test-mass support. In reality, instrumental noise of the seismometer will limit the isolation performance and introduce additional noise into Eq. (23). Nonetheless, Eqs. (21) and (23) show that any form of seismic isolation turns a gravimeter into a gravity gradiometer at frequencies where the seismic isolation is effective. For the passive seismic isolation, this means that the gravimeter responds like a gradiometer at frequencies well above the resonance frequency ω1 of the gravimeter support, while it behaves like a conventional gravimeter below ω1. From these results it is clear that the design of the seismic isolation and the gravity response can in general not be treated independently. As we will see in Section 2.4 though, tidal measurements can profit strongly from seismic isolation, especially when common-mode suppression of seismic noise, as in gradiometers, is insufficient or completely absent.
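A minimal time-domain sketch of the feed-forward idea follows; all signals are synthetic and the seismometer and actuator transfer functions are idealized to unity for clarity. The seismometer records δg1 − δα, the actuators add this (filtered) output to the platform motion, and the gravimeter output reduces to the differential gravity signal of Eq. (23).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
alpha = rng.normal(0.0, 1.0, n)               # seismic acceleration of the ground
g1 = 5e-3 * rng.normal(0.0, 1.0, n)           # gravity fluctuation at the seismometer / platform
g2 = g1 + 5e-4 * rng.normal(0.0, 1.0, n)      # gravity fluctuation at the gravimeter test mass

seismometer = g1 - alpha                      # single-test-mass instrument: gravity and seismic mix
platform = alpha + seismometer                # actuators add the (ideally filtered) seismometer output
# With unity transfer functions, platform acceleration = g1: seismic noise is cancelled,
# and the seismometer's gravity signal is imprinted on the platform.

gravimeter_output = g2 - platform             # differential readout of test mass vs. platform
print("residual rms :", np.std(gravimeter_output))
print("target  rms  :", np.std(g2 - g1))
# The output equals delta_g2 - delta_g1: the actively isolated gravimeter responds
# like a gravity gradiometer, as in Eq. (23).
```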
Gravity strainmeters
Gravity strain is an unusual concept in gravimetry that stems from our modern understanding of gravity in the framework of general relativity. From an observational point of view, it is not much different from elastic strain. Fluctuating gravity strain causes a change in distance between two freely falling test masses, while seismic or elastic strain causes a change in distance between two test masses bolted to an elastic medium. It should be emphasized though that we cannot always use this analogy to understand observations of gravity strain [106]. Fundamentally, gravity strain corresponds to a perturbation of the metric that determines the geometrical properties of spacetime [124]. We will briefly discuss GWs, before returning to a Newtonian description of gravity strain.
Gravitational waves are weak perturbations of spacetime propagating at the speed of light. Freely falling test masses change their distance in the field of a GW. When the wavelength of the GW is much larger than the separation between the test masses, it is possible to interpret this change as if caused by a Newtonian force. We call this the long-wavelength regime. Since we are interested in the low-frequency response of gravity strainmeters throughout this article (i.e., frequencies well below 100 Hz), this condition is always fulfilled for Earth-bound experiments. The effect of a gravity-strain field h(t) on a pair of test masses can then be represented as an equivalent Newtonian tidal field
δa(t) = L · e12ᵀ · (d²h(t)/dt²) · e12.   (24)
Here, δa(t) is the relative acceleration between the two freely falling test masses, L is the distance between them, e12 is the unit vector pointing from one test mass to the other, and e12ᵀ is its transpose. As can be seen, the gravity-strain field is represented by a 3 × 3 tensor. It contains the space components of a 4-dimensional metric perturbation of spacetime, and it determines all properties of GWs. Note that the strain amplitude h in Eq. (24) needs to be multiplied by 2 to obtain the corresponding amplitude of the metric perturbation (e.g., the GW amplitude). Throughout this article, we define gravity strain as h = ΔL/L, while the effect of a GW with amplitude aGW on the separation of two test masses is determined by aGW = 2ΔL/L.
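As a numerical illustration of Eq. (24), using the scalar-projection reading reconstructed above, and of the factor-of-two convention: the strain amplitude, signal frequency, baseline, and orientation below are assumed example values.

```python
import numpy as np

h_plus = 1e-21                      # assumed gravity-strain amplitude, h = dL/L
f = 10.0                            # signal frequency [Hz]
L = 3000.0                          # test-mass separation [m]

# Strain tensor of a +-polarized wave in the x-y plane (sinusoidal time-dependence factored out).
h = np.array([[h_plus, 0.0, 0.0],
              [0.0, -h_plus, 0.0],
              [0.0, 0.0, 0.0]])

e12 = np.array([1.0, 0.0, 0.0])     # unit vector along the baseline (x-axis)
h_ddot = (2 * np.pi * f)**2 * h     # magnitude of the second time derivative for a sinusoid

delta_a = L * e12 @ h_ddot @ e12    # equivalent Newtonian tidal acceleration, Eq. (24)
print(f"relative acceleration amplitude: {delta_a:.3e} m/s^2")
print(f"corresponding GW (metric) amplitude: {2 * h_plus:.1e}")
```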
The strain field of a GW takes the form of a quadrupole oscillation with two possible polarizations, commonly denoted × (cross) polarization and + (plus) polarization. The arrows in Figure 5 indicate the lines of the equivalent tidal field of Eq. (24).
Figure 5. Polarizations of a gravitational wave.
Consequently, to (directly) observe GWs, one can follow two possible schemes: (1) the conventional method, which is a measurement of the relative displacement of suspended test masses, typically carried out along two perpendicular baselines (arms); and (2) measurement of the relative rotation between two suspended bars. Figure 6 illustrates the two cases. In either case, the response of a gravity strainmeter is obtained by projecting the gravity strain tensor onto a combination of two unit vectors, e1 and e2, that characterize the orientation of the detector, such as the directions of the two bars in a rotational gravity strainmeter, or of the two arms of a conventional gravity strainmeter. This requires us to define two different gravity strain projections. The projection for the rotational strain measurement is given by
h×(t) = ½ (e1′ᵀ · h(t) · e1 − e2′ᵀ · h(t) · e2),   (25)
where the subscript × indicates that the detector responds to the ×-polarization, assuming that the x, y-axes (see Figure 5) are oriented along the two perpendicular bars. The vectors e1′ and e2′ are rotated counter-clockwise by 90° with respect to e1 and e2. In the case of perpendicular bars, e1′ = e2 and e2′ = −e1. The corresponding projection for the conventional gravity strainmeter reads
h+(t) = ½ (e1ᵀ · h(t) · e1 − e2ᵀ · h(t) · e2).   (26)
The subscript + indicates that the detector responds to the +-polarization provided that the x, y-axes are oriented along the two perpendicular baselines (arms) of the detector. The two schemes are shown in Figure 6. The most sensitive GW detectors are based on the conventional method, and the distance between test masses is measured by means of laser interferometry. The LIGO and Virgo detectors have achieved strain sensitivities of better than 10^−22 Hz^−1/2 between about 50 Hz and 1000 Hz in past science runs and are currently being commissioned in their advanced configurations [91, 7]. The rotational scheme is realized in torsion-bar antennas, which are considered a possible technology for sub-Hz GW detection [155, 69]. However, with an achieved strain sensitivity of about 10^−8 Hz^−1/2 near 0.1 Hz, the torsion-bar detectors are far from the sensitivity we expect to be necessary for GW detection [88].
Figure 6. Sketches of the relative rotational and displacement measurement schemes.
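The two projections can be verified numerically for perpendicular arms and bars aligned with the x, y-axes. The explicit forms of Eqs. (25) and (26) used here follow the reconstruction above, so treat the overall normalization as indicative; the strain amplitudes are arbitrary test values.

```python
import numpy as np

def rot90(e):
    """Rotate a 2D unit vector counter-clockwise by 90 degrees."""
    return np.array([-e[1], e[0]])

h_plus, h_cross = 0.7, 0.3
h = np.array([[h_plus, h_cross],
              [h_cross, -h_plus]])        # strain tensor restricted to the x-y plane

e1 = np.array([1.0, 0.0])                 # first arm / bar along x
e2 = np.array([0.0, 1.0])                 # second arm / bar along y
e1p, e2p = rot90(e1), rot90(e2)           # directions perpendicular to the bars

h_conventional = 0.5 * (e1 @ h @ e1 - e2 @ h @ e2)      # Eq. (26): responds to the +-polarization
h_rotational = 0.5 * (e1p @ h @ e1 - e2p @ h @ e2)      # Eq. (25): responds to the x-polarization
print("conventional (arms) projection :", h_conventional)   # -> h_plus
print("rotational   (bars) projection :", h_rotational)     # -> h_cross
```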
Let us now return to the discussion of the previous sections on the role of seismic isolation and its impact on gravity response. Gravity strainmeters profit from seismic isolation more than gravimeters or gravity gradiometers. We have shown in Section 2.3 that seismically isolated gravimeters are effectively gravity gradiometers. So in this case, seismic isolation changes the response of the instrument in a fundamental way, and it does not make sense to talk of seismically isolated gravimeters. Seismic isolation could in principle be beneficial for gravity gradiometers (i.e., where the acceleration of two test masses is measured with respect to a common rigid, seismically isolated reference frame), but the common-mode rejection of seismic noise (and of gravity signals) due to the differential readout is typically so high that other instrumental noise becomes dominant. So it is possible that some gradiometers would profit from seismic isolation, but it is not generally true. Let us now consider the case of a gravity strainmeter. As explained in Section 2.3, we distinguish gradiometers and strainmeters by the distance between their test masses. For example, the distance between the LIGO or Virgo test masses is 4 km and 3 km, respectively. Seismic noise and terrestrial gravity fluctuations are insignificantly correlated between the two test masses within the detectors’ most sensitive frequency band (above 10 Hz). Therefore, the approximation in Eq. (4) does not apply. Certainly, the distinction between gravity gradiometers and strainmeters remains somewhat arbitrary, since at any given frequency the approximation in Eq. (4) can hold for one type of gravity fluctuation while it does not hold for another. Let us therefore adopt a more practical definition at this point: whenever the design of the instrument places the test masses as distant from each other as possible given current technology, we call such an instrument a strainmeter. In the following, we will discuss seismic isolation and gravity response for three strainmeter designs: the laser-interferometric, atom-interferometric, and superconducting strainmeters. It should be emphasized that the atom-interferometric and superconducting concepts are still at the beginning of their development and have not yet been realized with scientifically interesting sensitivities.
Laser-interferometric strainmeters
The most sensitive gravity strainmeters, namely the large-scale GW detectors, use laser interferometry to read out the relative displacement between mirror pairs forming the test masses. Each test mass in these detectors is suspended from a seismically isolated platform, with the suspension itself providing additional seismic isolation. Section 2.1.1 introduced a simplified response and isolation model based on a harmonic oscillator characterized by a resonance frequency ω0 and viscous damping γ. In a multi-stage isolation and suspension system as realized in GW detectors (see for example [37, 121]), coupling between multiple oscillators cannot be neglected and is fundamental to the seismic isolation performance, but the basic features can still be explained with the simplified isolation and response model of Eqs. (2) and (3). The signal output of the interferometer is proportional to the relative displacement between test masses. Since seismic noise is approximately uncorrelated between two distant test masses, the differential measurement itself cannot reject seismic noise as in gravity gradiometers. Without seismic isolation, the dominant signal would be seismic strain, i.e., the distance change between test masses due to elastic deformation of the ground, with a value of about 10^−15 Hz^−1/2 at 50 Hz (assuming kilometer-scale arm lengths). At the same time, without seismically isolated test masses, the gravity signal can only come from the ground response to gravity fluctuations as described in Section 2.1.3, and from the Shapiro time delay as described in Section 2.1.2.
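To see where the quoted seismic-strain level comes from, a back-of-the-envelope estimate follows; the ground-displacement level is an assumed representative value for a quiet site at 50 Hz, not a figure from the source.

```python
# Rough order-of-magnitude estimate of seismic strain between uncorrelated test masses.
ground_displacement = 1e-12      # assumed ground displacement spectral density at 50 Hz [m/Hz^0.5]
arm_length = 1000.0              # kilometer-scale arm [m]

# With uncorrelated motion at both ends, the differential displacement is of the same order
# as the single-station displacement, so the strain is roughly displacement / arm length.
seismic_strain = ground_displacement / arm_length
print(f"seismic strain ~ {seismic_strain:.1e} / sqrt(Hz)")   # of order 1e-15, as quoted above
```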
This is an all-female compound of hawkers and kayayo (market porters). They're mostly new arrivals from rural areas.
Check this: there are 10 rooms in this compound, five women per room, belongings hanging in bags around the walls. Each woman was paying (back in 2000) ¢5000 per night to live there (10 rooms × 5 women × ~30 nights ≈ ¢7.5m per month for the compound): they generate all this money from selling or carrying.
The things they sell vary from day to day – chewing sticks, oranges, avocado pears, rice and stew – all manner of things. This means that whatever crops come into Accra from any of the ten regions, these people instantly distribute them to the four corners of Accra at the lowest possible price.
By my calculations, as a group these women generate more cash per square metre than we do in our one-family-per-compound, high-rent areas of Accra. Amazing Ghanaians...