Tuesday, September 5, 2017

Electoral College

Those who support maintaining the institution of the Electoral College as it is today defend it on several grounds. The primary claims are that the Electoral College

  • ensures that a president must win by appealing to citizens across demographics; merely winning the votes of urban centers is not enough if the concerns of more rural areas and states are ignored.
  • serves as a check on the power of more populous states and prevents a "tyranny of the majority".
  • maintains the integrity of the United States as a representative republic.
  • preserves the intentions of the Founding Fathers with respect to the function of the government.
  • prevents the election from being decided by only a few states.

Citizens of different regions may differ significantly in their political concerns and be impacted differently by legislation. If the president only needed to win the popular vote, the argument goes, he could ignore or sacrifice the concerns of these less populous rural areas for the sake of the more populous areas. The Electoral College ensures that the president has widespread acceptance across these demographics.

My question with respect to this argument is: why limit our concern to only geographic demographics? Different ethnic groups may also differ significantly in their political concerns. If our worry is that a purely popular vote will allow the concerns of some demographics to overshadow others, why shouldn't we also require a president to have widespread appeal across different ethnic demographics? Or religious ones? Educational, economic, or age-related ones? Limiting it to geographic concerns seems entirely arbitrary.

Does the EC serve as a check on the power of more populous states? Maybe. But there is already a check on the power of states by virtue of the Senate, where all states are afforded equal representation. The very fact that the House is (ostensibly) proportional to population stands in contrast to this. Also, in this case the EC isn't merely a check on power; it allows the minority party to actively do something, namely elect a president. That goes beyond just a check on the more populous states.

We are a representative republic by virtue of our legislatures, where bills are proposed and voted on by our representatives. None of this, however, has to do with how our representatives themselves are elected, which is in fact by popular vote. How would electing our highest representative in the same manner somehow undermine or contradict our government being a representative republic? I don't see it.

Reading The Federalist Papers as well as Madison's notes on the Federal Convention, I fail to see how the implementation of the EC had anything to do with protecting rural areas from being eclipsed by urban ones. (This would hardly have been a concern, seeing as at the time well over 90% of the population was rural.) It in every way appears that Madison saw it as an unfortunate concession for the sake of ratifying the Constitution, which was seen as preferable to falling back on the Articles of Confederation. While Hamilton supported the Electoral College, it wasn't for the sake of protecting smaller states, and he still desired that it be proportional to population.

Often a worry that "only a few states will decide the election" is expressed when the idea of a popular vote comes up. Two points. 1. This is already the case with the EC. Rather than a few populous states deciding the election, it simply shifts to a few swing states deciding the election, with candidates spending a disproportionate amount of time campaigning there. The EC does nothing to alleviate this worry. 2. If we change to a purely popular vote then it is not "states" that decide anything; it is the people that make up those states that decide. Those state lines are completely meaningless with respect to a popular vote. One may as well draw an arbitrary line around a densely populated area of Wyoming and one around a less dense area of California and make the argument "why should this city in Wyoming have more voting power than this city in California?".

As it stands, I think I support a national popular vote, or at least expanding the House by the Wyoming Rule to more accurately reflect the population. This better embodies equality among voters and holds true to the principle of one man, one vote, while the winner-take-all approach of the current makeup of the EC serves to reduce voter turnout and makes the value of a vote dependent on where it is cast.



Sunday, September 3, 2017

Basic Big O Notation Rough Draft

Measuring the "time" it takes for a machine to execute a particular algorithm brings with it several difficulties. It's not as simple as passing an algorithm to a computer and clocking how long it takes to complete the instructions. Different hardware and software may implement algorithms and operations in distinct manners: computers differ in their use of cache, in the number and speed of their processors, in the size of their RAM, in their use of SSDs versus HDDs, etc. This makes it impractical to truly understand the speed of an algorithm itself as distinct from the architecture implementing it. Therefore, to measure the speed of an algorithm we refer to how the number of operations grows with the size of the input, using Big O notation, rather than a simplistic measure of wall-clock time.

Imagine we have an array of integers that we wish to sum. To do this we iterate over the array one element at a time, adding each integer to a running total. If we were to double the number of elements in the array, we would double the number of operations we have to perform. This is known as linear time (due to the fact that the number of operations increases directly with the size of the input) and in Big O notation is signified by O(n). Contrast this with adding two arbitrary elements of the array. Because arrays are stored contiguously in preallocated memory, access to their elements is random access: any index can be reached directly, regardless of the total number of elements in the array. As such, all things being equal, it would take no more time to compute this sum if we were to double the size of the array. This is known as constant time (since the time taken is independent of the size of the input) and is signified by O(1).
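The contrast can be sketched in a few lines of Python (the function names here are just illustrative choices, not from any library):

```python
def total(values):
    """O(n): we touch every element once, so doubling len(values)
    doubles the number of additions performed."""
    result = 0
    for v in values:          # one addition per element -> linear in input size
        result += v
    return result

def add_two(values, i, j):
    """O(1): indexing into an array is a direct lookup, so this takes
    the same time whether the array holds ten elements or ten million."""
    return values[i] + values[j]   # two lookups and one addition, always
```

For example, `total([1, 2, 3, 4])` performs four additions, while `add_two([5, 10, 15], 0, 2)` performs the same fixed amount of work no matter how long the list is.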

There are of course all kinds of time complexities, such as exponential and poly-logarithmic times, and these follow naturally from what is defined here.
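Two further sketches along the same lines, again with illustrative function names: binary search over a sorted array, where each step halves the remaining range (logarithmic time), and a naive pairwise comparison, where the work grows with the square of the input (quadratic time).

```python
def contains(sorted_values, target):
    """O(log n): binary search halves the search range each iteration,
    so doubling the input adds only one more step."""
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return True
        if sorted_values[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return False

def has_duplicate_pair(values):
    """O(n^2): comparing every pair of elements means doubling the
    input roughly quadruples the number of comparisons."""
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                return True
    return False
```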

Monday, January 9, 2017

Divine foreknowledge and free will


The classical interpretation of omniscience raises a few problems. One of the strongest is the question of how one can reconcile a concept of omniscience in which God knows the truth value of contingent propositions prior to their instantiation in the world. If God infallibly knows that tomorrow I will perform some action, then I can't possibly will anything in contradiction to that knowledge. But then in what sense do I possess free will? One defense that can be raised is to claim that mere knowledge of a future contingent does not itself cause the event. A common example: I can have true knowledge that the sun will rise tomorrow, but clearly my knowing that fact has no causal relation to the sun rising; as the argument goes, this is also the case for God.

However, I don't see how this actually responds to the argument. If I truly know that the sun will rise tomorrow, it is only because the sun does not in fact have free will. The argument is about future contingents having a determinate truth value independently of, or prior to, the proximate agents, not about what ultimately determines those truth values (they may even be brute facts for the sake of the argument). If it is true today that I will perform some action tomorrow, then that proposition has a truth value, and what determines that value cannot be me, as the proposition is determinate prior to my activity, in fact prior to my very existence. If an agent isn't the primary cause of the truth about its actions, I don't see what a classical interpretation of free will could mean.

Another defense is to claim that God, being timeless, sees the entire universe not as a sequence of temporally occurring events but in one single "eternal now". Again, I fail to see how this answers the objection. The argument is about logical priority, not temporal priority. If God and his knowledge are logically prior to the universe, then the objection remains the same. If the actualization of some event is instead prior to God's knowledge of that event, then God's knowledge, and by extension God, are contingent. Moreover, if God only knows our actions because we perform them, then this knowledge would be entirely useless for prophecy or providence, as he could only know an event after the fact.


Sunday, April 10, 2016

Contingency of god in classical theism

According to classical theism, god is ontologically simple. This divine simplicity implies that attributes which created things might perceive as distinct are in reality equivalent within god; that is, god's intellect is god's goodness is god's will is god's essence, etc. This gives rise to several difficulties, but I'll speak of just one in this post. If god's will is in fact the same as god's essence, and god's will is free, then this would seem to imply that god is contingent. To see this, just imagine another world, one exactly like ours only with one more electron in it. Since in that world god would have willed something different than in ours, and since god's will is equivalent to his essence, the god who willed that world would have a different essence than the god who willed this one. Since his essence differs across worlds, different gods exist in different worlds, and thus his existence cannot be necessary. Aquinas attempts to answer this by distinguishing between absolute necessity and suppositional necessity; if his argument succeeds, it would at most prove that there is no contradiction between god's will being free and god's essence being necessary. It fails, however, to answer how the gods of these different created worlds can all be one and the same.

Virtual Particles

I must admit that I've always been confused by virtual particles. Are virtual particles "real"? Is the notion of virtual particles scientific? I am most certainly way off here, but here are some questions I have:

1. What foundation is there for speaking about virtual particles? That is, in quantum field theory/second quantization, states are equipped with creation/annihilation operators; however, these operators don't exist in relation to the internal lines of Feynman diagrams, so what is the underlying formalism?

2. In what sense can the intermediate terms of a perturbation series be said to be "real"? One needs to renormalize just to get an actual physical quantity to measure, at which point the terms corresponding to these supposed "virtual particles" no longer exist, no?

3. According to this thread, non-perturbative approaches to QFT such as lattice gauge theory don't give rise to virtual particles. Why do physicists say that virtual particles have an independent existence if they can only be spoken of when using a perturbative approach to calculating scattering?

4. If virtual particles can come into existence due to the uncertainty principle (leaving aside the question as to whether time can be introduced as an operator in quantum mechanics, and the fact that there is a ground state), then they are by definition not observable/measurable, in which case how can descriptions of them be scientific?

I have a few other questions about virtual particles, but these are the ones I'm having the most difficulty understanding.