1. 1 October 2014

    25 notes

    Reblogged from
    nuna80

    nuna80:

    James Joyce, Ezra Pound, John Quinn and Ford Madox Ford in Paris, Autumn 1923*

  2. Jesuit Science since the 16th Century

    wallifaction:

    [Image: Georges Lemaître]

    The Society of Jesus was officially recognized by the Catholic Church on this day in 1540, when Pope Paul III granted approval to the order in a papal bull. Since the days when they grappled with the Copernican question, the Jesuits have maintained an important place in the history of science. Many Jesuits resisted the move away from geocentric cosmology, and some contributed to Galileo’s trouble with the Roman Inquisition (including Christoph Scheiner, whom we’ll meet below), but it would be a mistake simply to characterize the Jesuits as obscurantists. Indeed, the Jesuits have often served as the Catholic Church’s advisors on matters of natural philosophy and science. Furthermore, historians have written at length about the exchange of natural knowledge between the Jesuits and the Eastern cultures they encountered during Catholic missions. Below is a small sample of the members of the Jesuit order who have contributed to natural philosophy or science since 1540:

    • Christopher Clavius (1538-1612). A Jesuit astronomer born in Germany, Clavius was never convinced to adopt Copernicus’ sun-centred cosmos, but he was nevertheless a skilled astronomer. He had a hand in establishing the Gregorian calendar which several Catholic countries adopted in 1582.
    • Christoph Scheiner (1573-1650). Another German Jesuit, Scheiner was one of the first astronomers to observe sunspots. This was the foundation of his animosity toward Galileo; the two astronomers argued over who first observed them and whether they were imperfections on or near the surface of the Sun (Galileo) or shadows cast by tiny stars (Scheiner).
    • Athanasius Kircher (1602-1680). Frequently identified by historians as a polymath, this German Jesuit contributed to fields of study ranging from medicine to geography to the study of Asian cultures. Although he never went there himself, Kircher wrote a sizable volume on China, drawing together the observations of fellow Jesuits who had gone on missions there. His China illustrata (1670) is pictured below.
    • Christian Mayer (1719-1783). This Czech Jesuit is remembered for his catalogue of binary stars (that is, systems in which two stars revolve around a common centre). He served as Court Astronomer to the Elector Palatine in Mannheim, where he had an observatory built.
    • Pierre Teilhard de Chardin (1881-1955). A French palaeontologist, geologist and Jesuit, Teilhard de Chardin is one of the best known Jesuit scientists of the twentieth century. He was a member of the team that discovered Peking Man, a famous set of Homo erectus fossils unearthed in China in the 1920s. Teilhard de Chardin is also known for arguing that Darwinism can be fully reconciled with Christian theology.
    • Georges Lemaître (1894-1966). Pictured above, Lemaître is the astronomer, physicist and Catholic priest (Jesuit-educated, though not himself a member of the order) who’s often credited with first proposing the Big Bang theory. Lemaître didn’t coin this term, but he did espouse the basic ideas of the theory, namely, that the universe began at a particular time and expanded outward from a single point. Canadians will be interested to know that Lemaître was developing his ideas while in Toronto for a meeting of the British Association for the Advancement of Science in 1924.
    • Bienvenido Nebres (1940-). This Jesuit priest, mathematician and pedagogue is a prominent figure in Filipino science. In 2011 he was granted the Philippines’ highest honour in science, the title of National Scientist, for his work to reform science education. During the 1970s he had papers published on infinitary mathematics while serving as the first president of the Mathematical Society of the Philippines.
    • Michael C. McFarland (1948-). An American Jesuit, McFarland is also a computer scientist and the former president of the College of the Holy Cross. In the 1990s, McFarland published technical articles on digital systems and also wrote about the ethical issues associated with computer technology, anticipating the ongoing concerns about this subject in the twenty-first century.

    Who’s your favourite Jesuit natural philosopher, mathematician or scientist? Answer in the comments below.

    [Image: Athanasius Kircher’s China illustrata (1670)]

  3. (Source: twistedxats)

  4. I learned that identical emotions do not spring up in the hearts of all men simultaneously, by a pre-established order. Later on I discovered that, whenever I had read for too long and was in a mood for conversation, the friend to whom I would be burning to say something would at that moment have finished indulging himself in the delights of conversation, and wanted nothing now but to be left to read undisturbed.

    — Marcel Proust, Swann’s Way (via talesofpassingtime)

  5. Ludwig Wittgenstein is my hair idol. 

  6. He awoke each morning with the desire to do right, to be a good and meaningful person, to be, as simple as it sounded and as impossible as it actually was, happy. And during the course of each day his heart would descend from his chest into his stomach. By early afternoon he was overcome by the feeling that nothing was right, or nothing was right for him, and by the desire to be alone. By evening he was fulfilled: alone in the magnitude of his grief, alone in his aimless guilt, alone even in his loneliness. I am not sad, he would repeat to himself over and over, I am not sad. As if he might one day convince himself. Or fool himself. Or convince others—the only thing worse than being sad is for others to know that you are sad. I am not sad. I am not sad. Because his life had unlimited potential for happiness, insofar as it was an empty white room. He would fall asleep with his heart at the foot of his bed, like some domesticated animal that was no part of him at all. And each morning he would wake with it again in the cupboard of his rib cage, having become a little heavier, a little weaker, but still pumping. And by the mid afternoon he was again overcome with the desire to be somewhere else, someone else, someone else somewhere else. I am not sad.

    — Jonathan Safran Foer, Everything Is Illuminated (via fy-perspectives)

  7. fyp-philosophy:

    UNSOLVED PROBLEMS IN PHILOSOPHY PART 7 OF 8

  8. fyp-philosophy:

    UNSOLVED PROBLEMS IN PHILOSOPHY PART 5 OF 8

  9. 30 September 2014

    20 notes

    Reblogged from
    c86

    c86:

    André Breton - Architecture Marseille (Collective drawing), 1940

  10. scinote:

    Question:

    How did we get computers to have a “memory”? I mean, if computers are just made up of chips of metal and electricity, how can they store information?

    Asked by anonymous

    Answer:

    Computers have been around for quite some time— perhaps not in the way we typically think of them, but they have been there. At first, it was easy to conceive of mechanical ways to store information; the problem came when we began demanding more of our computers and switched to electronic and magnetic components.

    The main principle is storing information in one of two states: either 1 or 0. In terms of electrical components, this is simple: you either have a component in the “on” state or “off” state. The ways to process that information, save it, optimize the process, and make it fully automated vary immensely. 
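
    As a toy illustration of that principle, here is a minimal Python sketch (just an illustration, not any particular machine’s scheme) of how a string of two-state components can encode a character and recover it again:

        # Toy illustration: a character stored as eight two-state "switches".
        def to_bits(ch):
            # ord() gives the character's numeric code; format it as 8 binary digits
            return [int(b) for b in format(ord(ch), "08b")]

        def from_bits(bits):
            # Reassemble the number bit by bit, then turn it back into a character
            value = 0
            for bit in bits:
                value = value * 2 + bit
            return chr(value)

        bits = to_bits("A")
        print(bits)             # [0, 1, 0, 0, 0, 0, 0, 1]
        print(from_bits(bits))  # A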

    Back in the good old days of computing, memory worked through purely mechanical means. How exactly did we achieve this? Well, one fairly well-known method used punched cards, or Hollerith cards. These were pieces of stiff paper with holes in them. The holes were punched in predefined positions, allowing early computers— and I mean 1800s computers, not your grandma’s computers— to process data and run automated processes. Note how the concept is fundamentally the same as in our current systems: you still have a set of two distinct states. Several other mechanical ways of accessing and storing information also arose during the early periods of computing, including valves and gears, but these methods were still slow and tedious.

    Eventually, we began to need faster, more efficient, and less bulky ways for storing and accessing information.

    The first attempt used electronic valves (vacuum tubes), basically circuits wired so that one valve is on while the other is off. This posed several problems in terms of space efficiency and was incredibly expensive, not to mention highly inefficient in terms of energy consumption. Another concern was how to make these systems “non-volatile”, so that you could restart your machine and still have your information there.
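
    The “one valve on, the other off” arrangement is essentially a bistable circuit, a flip-flop. Here is a rough software sketch of the idea, assuming nothing more than a simple set/reset cell (not a model of any particular historical circuit):

        # Toy model of a bistable storage cell (a set/reset latch).
        # A software sketch of the idea only, not a circuit simulation.
        class LatchBit:
            def __init__(self):
                self.state = 0      # the cell remembers its last set/reset

            def set(self):          # drive one valve on
                self.state = 1

            def reset(self):        # drive the other valve on
                self.state = 0

            def read(self):         # the stored bit persists until changed
                return self.state

        bit = LatchBit()
        bit.set()
        print(bit.read())  # 1 -- and it stays 1 until reset() is called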

    Another idea was to place a long tube of mercury with one end on a loudspeaker. Ideally, you would have waves travel through the tube and would be able to detect pulses at the end of the tube. The problem was that you had to constantly circulate these waves, and you could only detect the pulse for a very brief period, right when the wave was “bouncing back”.
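
    What that describes is essentially a delay-line memory. Here is a toy simulation of the recirculation idea, with a plain queue standing in for the mercury tube:

        # Toy model of a delay-line memory: pulses travel down a "tube" and
        # must be re-injected at the far end as they emerge, or they are lost.
        from collections import deque

        line = deque([1, 0, 1, 1, 0, 0, 1, 0])  # pulses currently in the tube

        def tick(line):
            bit = line.popleft()   # a pulse reaches the end of the tube
            line.append(bit)       # immediately feed it back in to keep it alive
            return bit

        # Reading the memory means waiting for each bit to come around again.
        print([tick(line) for _ in range(len(line))])  # [1, 0, 1, 1, 0, 0, 1, 0]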

    Eventually, we got to the point where we managed to create “cores”, which are basically magnetic rings threaded on wires. Bits of information were stored using the direction of the magnetization of the cores. The first cores were huge— storing 1Mbyte required the space of a small car, but we got around to making them smaller and more efficient.
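
    Core memory can be pictured as a grid of tiny magnetic rings, each one remembering a direction. A minimal sketch of that mental model (it deliberately ignores the driving and sensing wires, and the fact that real core reads were destructive):

        # Toy model of a core plane: an 8x8 grid of rings, each magnetized
        # in one of two directions (represented here as 0 or 1).
        cores = [[0] * 8 for _ in range(8)]

        def write_bit(row, col, value):
            cores[row][col] = 1 if value else 0   # flip the ring's magnetization

        def read_bit(row, col):
            return cores[row][col]                # inspect the stored direction

        write_bit(2, 5, 1)
        print(read_bit(2, 5))  # 1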

    To further optimize our computers, we shifted from magnetized cores toward electronic components. Namely, we now use transistor-based circuits, where an applied voltage determines whether or not current is conducted, producing the patterns of 1s and 0s.

    Nowadays, chances are your computer has either a Hard Disk Drive (HDD) or a Solid State Drive (SSD). HDDs are the most common way of storing information on your average computer. They’re basically metal platters with a magnetic coating that stores your information. The platters spin rapidly in an enclosed space while a read/write arm accesses the data. SSDs are a bit more of a novelty for your average PC user. Instead of being stored in a magnetic coating, data in an SSD is stored in interconnected flash-memory chips, much like those in a USB stick. Since they rely on neither magnetic coatings nor moving mechanical parts (like read/write arms), SSDs are faster and more reliable, but the drawback is that they are, at least for now, more expensive than HDDs.

    In the end, the history of computers revolves around the same central theme: how do we make information readily available and easy to process? Over time, we’ve been demanding more and more out of our computers. As we do so, we of course face increasingly difficult challenges and are forced (or encouraged, if you like) to reinvent our ways in order to keep up with the demand for power and efficiency.

    So how did we do it? We say: ingenuity, that’s how.

    Answered by Demian L, Expert Leader.

    Edited by Margaret G.

  11. turkeyinacan:

    shoutout to people working weekends and overnights and overtime, people working in hospitality and retail and food service, who are sacrificing time with their loved ones, so fuckers with weekday desk jobs get to live comfortably with the amenities we provide while simultaneously shitting all over us for not getting “real jobs”

  12. (Source: laljipota)

  13. motherboardtv:

    Canada Is Ignoring Netflix and Google During Broadcast Reform Hearings

  14. If self-similarity proves to be a built-in property of the universe, then perhaps sleep is, after all, a form of death— repeated at a daily frequency instead of a generational one. And we go back and forth, as Pythagoreans suspected, in and out of death as we do dreams, but much more slowly….

    — Thomas Pynchon — Against the Day (via ishii-hospital)