Hear Ye! Since 1998.

Archived Posts for April 2023

Weekly Report: April 30, 2023

Observations

  • On Friday evening, we arrived home after dinner to find that one of the large Modesto Ash trees in our front yard had split in two. The split half was being precariously held up by the branch of another tree, and that branch was the only thing preventing everything from collapsing. Nonetheless, the whole tree was now tilting like the Leaning Tower of Pisa, and hanging over the sidewalk like a gigantic Sword of Damocles for people taking their evening walks.
  • I quickly printed out a couple of warning signs and stuck them to the tree. But now what?
  • We had actually realized the tree was in the process of dying a couple of months ago and called an arborist. After tapping the trunk and several branches with a pen, he recommended we take the tree down, but said there wasn’t any rush as the winter storm season had passed. We signed a contract with his company ($1,500 for the removal, plus more for stump grinding) a couple of weeks ago, but then they strangely ghosted us, despite multiple follow-ups. A call to their emergency line went unanswered.
  • In any event, after Googling, it turns out that even when a tree of this size is clearly dead, you need a tree permit from the city to remove it. Tree permits take about 10 days to process, unless it’s an emergency. Two problems: the permitting office was closed since the weekend had basically started, and the tree permit required an arborist’s report for submission.
  • We didn’t feel comfortable waiting until Monday, and it was a toss-up as to who would collapse first over the weekend: First Republic Bank, the Warriors, or our tree.
  • Susanne got some quotes from other arborists through Yelp. We were surprised to get several immediate responses. Some were willing to come out on Saturday to give us a quote. One asked for $2,000 off the bat (ouch).
  • I called the city works department, which was closed, but a recorded message pointed me to an emergency line for issues like burst water mains and leaking sewer lines. The person who answered was as helpful as they could be — they didn’t know who would handle our issue, but took down my contact details and promised to get back to me.
  • Not knowing how long that would take, I then turned to Nextdoor and a few helpful neighbors suggested I call the non-emergency line for the police. (I later read in the city ordinances that the police can authorize a tree being removed if it’s a safety threat.) After calling, we were told someone else had apparently reported the issue already (it wasn’t our earlier call to the works department), and they would be sending someone around to inspect.
  • About an hour later, we got a knock on the door from a city worker. We took him to the tree. He shone his torch on the trunk, “Hmm, yeah I can see the split.” Then we directed him to look into the canopy where the other tree was supporting its half-fallen compatriot like a drunken sailor. “Oh. Oh yeah. That’s not safe. That’s not safe. I didn’t even see that. This is a tree on your property, but we’re going to call someone out to take some of these branches down because it’s not safe. We can cut it up but we can’t haul the debris away for you.”
  • Not a problem for us! We had already planned to pay for that, and now the city was telling us they were going to do it for free. We asked how long that would take, expecting it to be some time the next day, but were amazed to learn a crew would be out within the hour. (He was actually apologetic: “Well, the guys have got to go and pick up the equipment they need first.”)
  • Sure enough, a guy with a cherry picker and a chainsaw showed up and spent the next couple of hours cutting down the tree, one chunk at a time. He was done just before midnight and we were super appreciative.
  • Because bureaucracy is bureaucracy, he told us we still needed to file for an emergency tree permit for the removal. “However, it shouldn’t be a problem because this was an emergency situation and my boss is the person who approves those permits.”

Further Observations

  • First Republic Bank is going the way of Silicon Valley Bank. A lot of the VCs that were extremely vocal on Twitter about saving Silicon Valley Bank depositors when the bank was melting down have been strangely silent. “The government has to save all the depositors or the regional banking sector is going to die!” But this weekend, not a peep from the Twitterati. Not hard to figure out why.
  • H-1B visas are the most common type of visa for international students looking to stay in the U.S. to work. Only 85,000 are available each year (including a 20,000 set-aside for holders of a U.S. master’s degree or higher). This April there were over 780,000 applications. Yikes. When I applied for my H-1B back in 2010, the silver lining of being in the depths of the global financial crisis was that there were more than enough H-1Bs to go around.
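To put those numbers in perspective, here is a quick back-of-the-envelope calculation. It deliberately ignores the separate master’s-cap draw, which shifts the odds slightly between the two pools:

```python
# Rough odds of selection in this year's H-1B lottery, using the
# figures above. Simplification: the 20,000 master's-cap slots are
# drawn separately, so actual odds differ slightly by applicant pool.
cap = 85_000
registrations = 780_000

odds = cap / registrations
print(f"Roughly {odds:.1%} chance of selection")  # about 1 in 9
```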

Articles

Charts, Images & Videos

If you think you’re having a hard week, just be glad you’re not doing this:

On Twitter

  9:00pm  •  Life
Weekly Report: April 23, 2023

Observations

  • This week was the series finale of Star Trek: Picard. I’ve been a Trekkie for a long time and I have blog entries dating back from as long as 25 years ago (!!) writing about Trek. Given the special nature of the season, it felt appropriate to write a few more words about it. Spoilers ahead.
  • The last time we saw the whole TNG cast together was in Star Trek: Nemesis. Unlike the original series cast’s swan song in Star Trek VI, the TNG cast never had a proper send off, as poor results at the box office for the decidedly sub-standard Nemesis saw their time on the screen come to an abrupt end.
  • Now, a little over 20 years later, the whole senior TNG cast — naturally playing characters that are also 20+ years older — has been reunited in a fitting end and satisfying closure to the TNG crew’s journeys together.
  • The previous two seasons of Picard were forgettable, and the writers of Season 3 evidently did their best to forget them. Season 3 was much better.
  • I found Season 3 deeply nostalgic, and the writers made sure to throw in tons of callbacks and Easter eggs to stoke that nostalgia. So much fan service, and I loved it. It would have been nice to see more cameos from DS9 and Voyager folks (apparently they ran out of budget), but they did include an oblique reference to Odo, who was played by the late René Auberjonois.
  • When you see someone continually over the course of years, you don’t notice the aging process so much. But the last time we saw the crew, they were much younger. So suddenly seeing them all together again on the screen, 20 years later, was a stark and constant reminder of the passage of time. For me, it brought back memories of the times I spent watching and discussing episodes of TNG and DS9 with a couple of my best friends during high school and university (screening at 11pm on Channel 9 during a weeknight). Occasionally we would scoop up whatever VHS tapes we could get our hands on at the local Video Ezy and watch episodes late into the night. And it was also a sobering reminder that life has moved on — with the onset of middle age, we now each have our own “next generation” to bring into the world.
  • The closing scenes were great — Picard delivering lines from Julius Caesar and a poker game to mirror the end of All Good Things… Apparently they filmed the cast casually playing poker for about 45 minutes, took an excerpt from it, and rolled it over the closing credits. (Oh, and Q is back.)
  • Anyway, much feels, and time to move on…

Articles

Movies & TV

Charts, Images & Videos

On Twitter

  9:00pm  •  Life
Weekly Report: April 16, 2023

Observations

TikTok

I was asked for my thoughts on TikTok. My thoughts are pretty simple.

The potential for social media to do harm in various forms has been recognized for many years now. Ad-driven social media business models rely on engagement and attention, and their algorithms optimize for that. That often means displaying content that is provocative or even incendiary. It leads users down rabbit holes and into echo chambers. It influences emotions and perspectives and messes with adults psychologically. For kids, it’s even worse. I believe it can really mess up kids during their most formative years — especially if they are left with it for hours unsupervised, each day. (Gen Z believes it too. See “Do the Kids Think They’re Alright?” in the Articles section below.)

TikTok has incredible engagement and reach. People spend hours on it each day. It has over 100 million users in the U.S. As a business, I wish I could invest in it. As a product, I’m not a user. But some of the content on there and the creativity on display is pretty incredible.

Overlaid on top of all of this is the fact that TikTok is owned by ByteDance, a $300B Chinese company. Chinese companies (and particularly tech companies, as of late) are vulnerable to interference by the Chinese government far beyond a level that exists in the U.S. The Chinese government even has a board seat and some ownership of a key ByteDance company. (ByteDance also has several other apps that rank very well in U.S. app stores.)

Facebook has been used by domestic and foreign actors to spread misinformation, influence hearts and minds, and generally damage the fabric of society.

It then doesn’t take a lot of imagination to realize that if the company responsible for the algorithm itself were trying to set an agenda (in this case, at the behest of the CCP), it could do so by directly altering the algorithm in a way that could have much more significant impact than external actors trying to game the system. It’d be a good way to subtly spread propaganda or stoke social discord in a mostly inscrutable way.

Data surveillance is also an issue, but it doesn’t seem as significant. The U.S. government certainly has the ability to covertly compel production of certain data from private companies. You’d also think someone would notice if the apps were Trojan horses for spying on whatever else is happening on a user’s device.

TikTok is banned within China itself, which is pretty telling.

The result is a national security threat that seems very real. It’s hard to see another outcome from this situation other than for TikTok to be spun off, or banned. I won’t be sad if either of those things happens.

Some may argue that banning TikTok would be hypocritical of the U.S. The U.S. regularly decries China for muscling out U.S. tech companies from the Chinese market — only to threaten to turn around and do the same thing. This is a whataboutism-type argument which doesn’t respond to, nor undermine, the genuine concerns above. Nor does it undermine the notions of sovereignty and what’s in the national interest that allow countries to engage in this behavior. It’s not ideal, but there’s no reciprocity here.

Further Observations

  • I found out what happens when you forget to remove crayons from the pockets of your pants before you throw them in the wash. It looks like my toddler drew all over our clothes. Bits of melted wax everywhere. The internet’s solution of washing the clothes in very hot water, dish detergent, vinegar and washing detergent was not completely effective. ChatGPT was unhelpful. Wife not amused.
  • And while I was at it, my wallet also took a bath in the washing machine.
  • We’re almost at the end of Season 3 of Picard, and not only is it far superior to the other two seasons, but they’ve also dialed the nostalgia up to 11, and done it pretty tastefully. This week’s episode is the penultimate one for the series, and it was amazing. TNG was an indelible part of my adolescence, and it’s kind of wild seeing the entire cast re-assembled, 30 years later.

Articles

Image of the Week

On Twitter

  9:00pm  •  Life
Weekly Report: April 9, 2023

Observations

  • The kids picked up something nasty at pre-school and it took the whole family down this weekend. So, only a short update for this week, but a long and interesting articles list below.
  • Google Flights is my tool of choice for finding paid airplane tickets. It’s quick, flexible, and links off to other sites to complete the booking. (I greatly prefer booking directly with airlines rather than through third parties like Booking.com, Dreamz, etc., which can be a huge pain to deal with if something goes wrong.) Google Flights now offers a limited guarantee where they will reimburse the difference if the fare drops after you book. There are limitations, of course:
    • You need to be signed in to your Google account and use U.S. details when you make your booking (currency, phone, address).
    • The booking needs to be no more than 60 days in the future.
    • It’s only offered for some itineraries (denoted by a badge).
    • $500 maximum reimbursement, max 3 times a year.
  • Apparently an unofficial slide that an associate at Paul Hastings prepared has been making the rounds:
  • While Paul Hastings was quick to disclaim this as the firm’s official position, about 70% of it is accurate, I think. Even for someone who routinely spends these dollars on outside counsel, #3 is not a reasonable expectation (except, perhaps, during crunch time of a major transaction), and firms should be able to manage staffing to give their staff time off and appropriate coverage. As for #4, I don’t expect everything to be done yesterday; it’s not always possible, so I try to give reasonable timeframes and don’t like to put outside counsel through needless fire drills. On #7, “no poor connections”: we’re all at the mercy of Comcast. #9 is a totally acceptable answer for an associate, as long as you follow it up with “but I’ll find out”. #5 is an important point: if you pay $1,000+ per hour to anyone for anything in life, you’re going to expect gold-plated service.

Articles

Charts, Images & Videos

Source: Reddit

On Twitter

  10:00pm  •  Life
Weekly Report: April 2, 2023

Observations

Initial Thoughts on AI

I have been mulling writing a lengthy post about AI, but there is so much happening, and it’s happening very quickly, so I’m still processing it all. So instead, here is an assortment of initial thoughts. I’m sure there will be more in the future.

The New New Thing. In my lifetime, foundational developments in information technology have happened every decade or so: the personal computer, the internet, and the rise of mobile. All of these developments changed the world. (You’ll notice that absent from my list is crypto and the metaverse.) Generative AI now joins that list and I think not only is it as significant as the others, it’s on track to be in a league of its own.

Speed. The technology is developing, and practical applications and an entire industry are springing up to take advantage of it, at a crazy fast pace.

  • The most prominent player in the GAI space is OpenAI. GPT-3 was originally released in mid-2020, but it wasn’t until OpenAI released ChatGPT, a chatbot based on the GPT-3.5 large language model, that the technology sprang into the public consciousness. ChatGPT reached a mind-blowing 100 million users within two months of its launch in November 2022. OpenAI released GPT-4 last month and upgraded ChatGPT to use it (available with a paid ChatGPT Plus subscription). GPT-4 is significantly better than GPT-3, and can handle both text and image inputs. GPT-5 is rumored to be released by the end of the year.
  • The GPT models are static, relying on the information they were trained on at the time. As such, they are more or less frozen in time. ChatGPT plugins allow ChatGPT to access external information sources, and perform actions against external systems. This multiplies the power of ChatGPT. The fascinating thing is that you can use ChatGPT to help integrate itself with other systems.
  • There are tons of companies, products, and use cases that have been created just this year that leverage GPT as well as other AI models that generate images, audio, video, and code.

How It Works. Read Stephen Wolfram’s article for a detailed description of the concepts behind text-generative AI. Even if you don’t understand all of it (I certainly didn’t), or have to give up halfway through, you’ll still come away with a good idea of how it conceptually works.

Use Cases. There are far too many use cases and examples of amazing things generative AI can do to list. Here is a smattering of interesting ones I have been thinking about lately:

  • Mimic someone’s voice just by providing an AI model with 30-60 seconds of audio. The output comes complete with accents, foreign languages, and vocal tics.
  • Inventing an idiom.
  • Out of all the major voice assistants (Google, Alexa, Siri), Google is the most capable. But it’s no ChatGPT. Once you plug AI into them, you’re going to see supercharged voice assistants capable of conducting conversations. It’s not only the quality of their answers that will improve, or what actions they will be able to perform, but AI will improve the quality of voice recognition too. And if you want your voice assistant to sound like anyone on the planet, it’ll probably be possible to do that one day (see “mimic someone’s voice” above).
  • Within literally 10-15 minutes, I was able to create a working script for monitoring the prices of securities (both by scraping stock prices from a webpage and crypto prices from the Coinbase API), recording prices to a database, and emailing price alerts, just by asking ChatGPT about 5 questions (some debugging was still required).
  • In 2017, I spent a week writing code that helped me arbitrage BTC and ETH between a half dozen exchanges. I estimate I could have completed that in maybe a third to half of that time if I had help from AI.
  • Technical analysis and algorithmic trading.
  • Legal document analysis and review.
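The post doesn’t include the price-monitoring script mentioned above, but a minimal sketch along the same lines might look like the following. Assumptions: it uses Coinbase’s public v2 spot-price endpoint for crypto prices, an SQLite database for recording, and placeholder SMTP details for the email alert; the web-scraping half for stock prices is omitted.

```python
import json
import sqlite3
import smtplib
import urllib.request
from email.message import EmailMessage

# Coinbase's public spot-price endpoint (no API key required).
COINBASE_SPOT_URL = "https://api.coinbase.com/v2/prices/{pair}/spot"

def fetch_crypto_spot(pair="BTC-USD"):
    """Fetch the current spot price for a currency pair from Coinbase."""
    with urllib.request.urlopen(COINBASE_SPOT_URL.format(pair=pair)) as resp:
        payload = json.load(resp)
    return float(payload["data"]["amount"])

def init_db(conn):
    """Create the price history table if it doesn't already exist."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS prices "
        "(symbol TEXT, price REAL, ts TEXT DEFAULT CURRENT_TIMESTAMP)"
    )

def record_price(conn, symbol, price):
    """Append one observation to the price history."""
    conn.execute("INSERT INTO prices (symbol, price) VALUES (?, ?)", (symbol, price))
    conn.commit()

def breached(price, low, high):
    """True if the price has moved outside the alert band."""
    return price < low or price > high

def send_alert(symbol, price, smtp_host, sender, recipient):
    # SMTP host and addresses are placeholders; substitute your own.
    msg = EmailMessage()
    msg["Subject"] = f"Price alert: {symbol} at {price}"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"{symbol} is trading at {price}.")
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```

A loop calling `fetch_crypto_spot`, `record_price`, and then `send_alert` when `breached` returns true, on a timer, is all the glue that’s left.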

Existential Risk. AI has issues. A ton of issues. The most splashy one is existential risk. The basic idea is that AI at some point will be able to develop itself, leading to a runaway super-intelligence that outstrips our ability to control it (a technological singularity). Humanity becomes a casualty, whether through some form of malevolence or just as collateral damage from “optimization”. It doesn’t necessarily require an AI that is “self-aware”; it may just be the rampant execution of internal logic that results in major unintended consequences.

  • An open letter signed by Elon Musk, Steve Wozniak, and a host of luminaries in the AI world calls for a six month pause in the training of AI systems more powerful than GPT-4. They write:

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

  • Eliezer Yudkowsky goes even further, writing in TIME:

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.

Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.

Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

  • Of course, there are those who flatly disagree, saying that AI is nowhere near the stage at which we should worry (e.g. we’re still far away from building an artificial general intelligence), or that the worry-warts have ulterior motives (they want the additional time to catch up to OpenAI), or that in practice there’s no way to halt AI development, so why bother?
  • The latter school of thought is emblematic of today’s world, in which a lack of expertise in a field isn’t a barrier to expressing a strong opinion. Strong opinions are provocative, and that is a means of getting attention. In reality, we just don’t know what’s going to happen with any certainty. Even the creators of generative AI models can’t predict what their own creations will do as they develop. But when the potential risks are astronomical, even a far-fetched and fanciful probability that they may come to pass means those risks deserve thoughtful introspection, humility, and caution.
  • Given the potential for wide-ranging harm that generative AI has, I do think we need to get some tentative regulations and protocols in place, or at least start a robust discussion about it between all the major players. Had we known that social media would have the side effects it does today (misinformation, teen depression, etc.), perhaps some things might have turned out differently.
  • The idea of the Great Filter is one theory why we haven’t found intelligent alien life yet. The theory goes that there is some universal, insurmountable step that prevents civilizations from expanding into the galaxy. If it exists, we cannot know whether we have already passed that step, or whether it lies ahead of us. Nuclear war and climate change are traditional candidates for a future Great Filter that are in play. Runaway AI is another one that has been theoretical to date, but it feels more real today than ever.

More Issues.

  • Dystopia. Even if AI turns out not to be an existential threat, I don’t think anyone disputes that it will eventually cause massive societal change. The short story Manna presents both a dystopian and a utopian view of AI. The dystopian threat doesn’t look like Terminator robots, or a Skynet-type overlord, but simply a bunch of machines optimizing situations without any regard for humanity. And consider the court system. Courts are the final avenue of appeal for disputes, including those that result from automated decision making (e.g. your insurance claim got denied). Now, it doesn’t take a big leap of imagination to cut out human judges and start having computers render judgments in smaller forums, like small claims courts. After all, computers have the ability to synthesize the entire history of jurisprudence to figure out a ruling. And then, keep sliding down that slippery slope. (There’s a reason why the GDPR requires human intervention to be available where an individual disputes an automated decision based on their personal data.)
  • Bias & Misinformation. Generative AI has issues with bias — including political and racial — and being confidently incorrect. Therefore, there are real concerns about models having the ability (or some would say being intentionally imbued with the ability) to persuade and misinform, and the ability to help with misinformation campaigns. Additionally, Twitter believes the challenge of detecting bots is much harder with AI in play, since sorting out who’s human and who’s not just got more difficult. In response, they are focusing on paid subscriptions as a way to help verify identity.
  • Criminal Enterprises. This one is going to be a really big problem. Think of all the scams you can improve with AI (deepfakes included).
  • Legal Issues. There’s a lot here.
    • The biggest one relates to copyright infringement. AI models are trained on a huge amount of data, and almost all of this data is owned by someone else. Is use without permission infringing, or is there a fair use argument?
    • There are also data privacy concerns. Italian privacy regulators have banned ChatGPT due to concerns with GDPR compliance.
    • Ownership of materials generated by AI is also up in the air. Just like for a photo taken by a monkey, art generated by a computer is not eligible for copyright protection, according to the U.S. Copyright office.
    • AI makes mistakes, and so people may get into various sorts of trouble if they blindly rely on what AI says. You can put all the disclaimers you want, but people aren’t going to be bothered to double check what the computer says a lot of the time.
    • Information ingested by AI models also raises issues around compromising the confidentiality of any sensitive information obtained. The information itself isn’t a discrete component of the model, so it’s hard to extract.
  • The Chinese Government. Using AI models requires a great deal of computing power. Queries are therefore sent off devices for processing in the cloud, but it’s foreseeable that devices will be able to do that processing on-device at some point. There’s a hypothesis that Apple, with its impressive homegrown chips, is maneuvering into a position to be able to do this (and it’s also in line with its emphasis on privacy). It’s tougher to censor what is happening on a device, so this represents a challenge for the Chinese Government’s censorship efforts. I imagine this will lead to the technology and any hardware that runs it being heavily regulated, or even banned. It is of course possible to build models that return answers that toe the party line (OpenAI has been accused of injecting liberal bias into their models, and they also censor results that are deemed harmful), but AI is tougher to directly control because it’s a bit of a black box. Travel blogger Gary Leff writes: “Perhaps AI and Large Language Models represent the next best hope for an end to repression in China. If China wants to compete in this space, they’ll have difficulty doing it behind the Great Firewall. Their tools won’t be as strong as the ones from the West, with access to more knowledge to train on. Is there a stable equilibrium where their AIs can train on unrestricted content, but answers from the AI remain restricted? What about when everyone has AI chatbots on their phones, rather than central servers?” I’m unfortunately not optimistic about this, as in earlier times people thought the internet couldn’t be tamed, but the Chinese Government has done a pretty darned good job of it.

Miscellaneous Stuff.

  • It has been said that the internet makes information globally available, so gaining an advantage is less about who has access to information and more about who can analyze it the fastest and execute on the actions needed to capitalize on it. Now, with AI helping to analyze information, and potentially being able to act on it, where does that leave humans? A lot of valuable content will now consist of real-time reporting and opinion pieces, since research and summarization are things that AI is pretty efficient at.
  • I’ve also tried Google Bard and Bing Chat, but ChatGPT is the best (and it can write code).
  • ChatGPT uses a “prove you are human” Captcha, which I thought was ironic. Captcha’s days are numbered.
  • When I joined a law firm after graduating from law school, a second year solicitor was assigned to show me the ropes (we both reported to the same partner and were on the team that did work for Microsoft). That person is now the Chief Responsible AI Officer for Microsoft. I haven’t spoken to her in the 15 years since I left the firm, but that job must really be something!

Further Observations

  • Trump got indicted on over 30 counts on Thursday. The counts are supposed to become public next week. Interesting times ahead.
  • The CFTC is going after Binance. Binance’s response is a bit lame, and there’s a lot of stuff in the CFTC complaint that really doesn’t look good for Binance. One example: “Binance’s senior management, including Zhao, knew the Binance VPN guide was used to teach U.S. customers to circumvent Binance’s IP address-based compliance controls. In a March 2019 chat, Lim explained to his colleagues that ‘CZ wants people to have a way to know how to vpn to use [a Binance functionality] … it’s a biz decision.’ … in a July 8, 2019, conversation regarding customers that ought to have been ‘restricted’ from accessing the Binance platform, Lim explained to a subordinate: ‘they can use vpn but we are not supposed to tell them that … it cannot come from us … but we can always inform our friends/third parties to post (not under the umbrella of Binance) hahah.’”

Articles

Movies & TV

  • John Wick: Chapter 4
    You know what you’re getting, and there’s 2 hours and 49 minutes of it. And Keanu only speaks about 360 words in the whole movie. I was thoroughly entertained.

Charts, Images & Videos

Source: Republic Bank

On Twitter

  9:00pm  •  Uncategorized

