25
Nov 08

Diving into GIT

Over the last year, I have known that the day would come when I could no longer avoid moving from SVN (my comfort zone) to this new beast called GIT that everyone is so excited about. My first hour, which was spent installing it on my Mac and pulling down a repo to play with, was very pleasant. My ssh keys and ~/.ssh/config were already set up the way I wanted and everything just worked.

The pain began when I started in on moving our build slaves over to GIT. Of course, two of the three are running Windows because we have to run the Windmill tests against IE. In an attempt to keep things simple, I wanted to avoid installing Cygwin on the machines, so I tried msysGIT. Oh WAIT, I have to get ssh to work before I can actually use GIT to pull down the repo.

After trying and failing with OpenSSH, I finally realized that the Windows PuTTY package was the best way to go about this. There was much frustration involved with this process because it requires that you take the ssh key you generated on your Mac (and had added on the server) and convert it to a PuTTY ppk. Fortunately this turns out not to be that painful using PuTTYgen. The next piece of the puzzle was to get Pageant to load this key automatically when the machine boots. I went through some rigamarole trying to create a shortcut in the “Startup” items and appending a string to its path to get it to load the correct key. This didn’t work; I’m sure it was a combination of Windows being terribly un-user-friendly and my brain expecting things to “Just Work TM”. Finally I just created a shortcut for the actual key and stuck it in the “Startup” folder, and it works! (You still have to enter your passphrase on boot, but it’s better than the alternatives.) I actually found a post on a PuTTY forum where someone was asking to automate this piece too and the response was basically, “No, never, die. Doing that defeats the purpose of SSH”.

One step that I forgot to mention is that to get GIT to use the right SSH key you have to set the environment variable: GIT_SSH=path/to/putty/plink.exe
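
If you are setting up more than one machine, it may be worth making this persistent from a cmd prompt; the path here is only an example, so adjust it for wherever PuTTY actually landed on your box:

setx GIT_SSH "C:\Program Files\PuTTY\plink.exe"

If setx isn’t available on your version of Windows, the System Properties environment variables dialog does the same thing. plink will then authenticate with whatever keys Pageant has loaded.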

Next I went and started my git clone, which kicked off perfectly. That wasn’t so bad! 46% and 1.99GB into the retrieval process I received two fatal errors that looked like file system errors. Some Googling revealed that msysGIT only supports repositories up to 2GB. The answer on the forums for this problem was, “have a smaller repo”. That wasn’t going to happen, so I headed back to the GIT homepage, where I found that the only real lasting option is to use Cygwin GIT.

After getting Cygwin GIT installed and the repo fully downloaded, I checked to see if I could use msysGIT to pull changes (which should never be 2GB worth), and it absolutely freaks out. So that’s not an option, and neither is adding cygwin/bin to your path, because that complains about ‘exec ssh’ not existing.

You’re probably wondering, why don’t you just use Cygwin to do your GIT stuff? Well, the issue is that we are using Hudson to queue up jobs on the machines, using the cmd environment to pull changes and run the tests in the repo. It became very clear that the best way to handle this was to run the slave agent from Cygwin, so that the jobs are actually running in Cygwin land, and this turned out to work — awesome!
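
For anyone trying to reproduce this: the same slave agent can be started by hand from a Cygwin shell instead of the browser launcher, with something along these lines (slave.jar is served by the Hudson master, and the host and slave name here are placeholders for your own setup):

java -jar slave.jar -jnlpUrl http://your-hudson-host:8080/computer/your-slave-name/slave-agent.jnlp

Everything the jobs spawn then inherits the Cygwin environment, which is exactly what we wanted.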

To finish describing my pain, the last problem was that since I am running the tests from c:\main (you have to be inside a git repo to pull, and I want to keep these jobs simple), I am generating the log file in the wrong place, because the job looks in its home directory to find the JUnit compatible XML results file. Fortunately there are some environment variables I can use to get the files back where I want them: mv *.log %WORKSPACE%

Now that life is back to normal and I’m getting used to living in the new GIT world, I can get back to important things like making Windmill more awesome and increasing test coverage.

Cheers!


19
Nov 08

Windmill Gets a Facelift for 1.0Beta1

Working up to the Windmill 1.0 Beta 1 Release, I finally had the opportunity to put some time into making the IDE (that a lot of you live in when in test writing mode) a little bit nicer to look at.

The IDE has been growing organically since 0.1 and there was a lot of functionality hacked into it that wasn’t in the original game plan, so I did what I could to improve the beauty of the CSS/Layout as well as the whole mess of code behind it.


Launching
If you have used our latest release, or are running trunk, you know that we have significantly improved the load times for the Windmill IDE. By compressing the JavaScript when the service is instantiated, we can simply hand the IDE window one file that contains the vast majority of the required code.

The reason this makes such a huge performance difference is that we are loading the source via the local Windmill proxy, and the data size had very little impact; the overhead was in the browser’s two-connection limit. When you have to pull down ~30 files two at a time it takes its toll, and it made the IDE feel very sluggish, more like a web page loading than an IDE.
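
The bundling itself is nothing clever. Conceptually it boils down to something like this (a simplified sketch, not the actual Windmill code, and skipping the compression step):

import os

def build_ide_bundle(js_dir, bundle_path):
    # Glue every .js file in js_dir into one file that the proxy can
    # hand to the IDE window in a single request.
    bundle = open(bundle_path, 'w')
    for name in sorted(os.listdir(js_dir)):
        if name.endswith('.js'):
            bundle.write(open(os.path.join(js_dir, name)).read())
            bundle.write(';\n')
    bundle.close()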

In the process of figuring out exactly what was slowing down the launch time we added some more informative messages and output so you don’t sit there staring at a twirling circle graphic wondering if anything is happening. And to make the experience even more fun, I couldn’t help but implement a progress bar.

General Layout
I removed the toolbar at the bottom of the screen, which I felt was an irritation for test editing (especially with the drag-and-droppable actions). It is now a drop-down menu at the top right of the screen, alongside the rest of the UI access to IDE functions.

Settings and Firebug Lite Improvements
The settings dialog has continued to improve: we have implemented more useful defaults, added new options, removed deprecated ones and simply made it look better. Thanks, jQuery UI!

Firebug Lite has been a very popular feature since we first announced it, which has led to a handful of bug fixes over the last month. The biggest of these was that the initial Windmill implementation of Firebug Lite required you to have Internet access, as it was using resources that were hosted elsewhere.

These have since been copied to our source tree and are made available by the Windmill server, so you can happily introspect your web app’s JavaScript while writing tests on your intranet.


Output and Performance
Instead of writing all the raw Windmill output to the output and performance tabs, there is now an array called windmill.errorArr, where all the errors and warnings about technical details are pushed in case you are interested in seeing all that data. More likely you aren’t, and scrolling through all that output becomes tedious.

This is why we have implemented output in blocks, with the background color representing pass/failure as green/red (white for performance). These blocks are expandable; clicking one reveals all the output (or performance information) we know about the action that was executed. This should give you a faster general overview of your results and let you quickly see the details you care about.


Other Worthwhile Mentions
We moved our XPath implementation from Ajax-Slt to JS-XPath, which has proven to be more accurate when it comes to resolving XPath generated in Firefox (or using Firebug) against browsers without native XPath support, such as IE.

Many bug fixes and improvements have been made to the DOM Explorer, which should now feel a lot more like the Firebug DOM inspector, but should work in any browser.

We have also put a lot of effort into improving the communication between the JavaScript Controller and the Python Service so that when a test fails you get as much detailed information in the service as you do in the IDE.

Timing and MozMill
The timing has lined up nicely as we are working on both a 1.0 release for Windmill and MozMill. MozMill is geared towards automated testing of all applications on the Mozilla Platform and functions in the trusted space providing lots of very useful flexibility.

You can currently try out MozMill 1.0rc1 as a Firefox Add-on, and keep your eyes peeled as some exciting new MozMill feature work is around the corner.

Participate
We are always trying to make life easier for the test writer, so please log your bugs and feel free to come chat with us in #windmill on FreeNode.


19
Sep 08

Zero to Continuous Integration with Windmill

Following ‘automation’ and ‘continuous integration’ in the microblogging world, I have seen a major influx of people super interested in functionally automating their web apps. I have seen a slew of things about Grid, Selenium, and people hacking on Watir, so I decided to show you from the ground up how incredibly easy it is to get automated test runs set up using Windmill and Hudson. I am not going to walk you through every detail; this is much more high level, but I do plan to start a ‘continuous integration’ page on getwindmill.com in the near future for those kinds of details.

The first step is to get a couple of machines that you want to use as slaves and a machine to run Hudson; our setup looks like this:

Each of the machines with a different OS has Windmill installed. To make them slaves you simply bring up the Hudson web page on the machine, and run the launcher.. now it’s a slave — crazy easy right?

Now to set up test runs for the machines: in Hudson you click “New Job” on the left-hand side and do something like the following:

Tie this job to the slave you want it to run on (we can’t have IE runs happening on MacOSX):

Tell this job to run 10 and 30 minutes after the hour:
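
Hudson’s “Build periodically” field takes cron-style syntax, so assuming the standard five-field format, the schedule above comes out to simply:

10,30 * * * *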

Then come the build steps that actually run the tests; the first kills any straggling processes (more details below):

On the Mac, for the Safari job, I want to make sure there aren’t any instances of Safari left hanging or windmill processes sitting around, so we do:
ps -ax | grep windmill | awk '{ print $1 }' | xargs kill | true
ps -ax | grep Safari | awk '{ print $1 }' | xargs kill | true

Then we want to grab the latest test code from svn and launch the windmill test:
svn up /Users/adam/Documents/main_bt/windmill/
python /usr/local/bin/windmill safari http://www.facebook.com test=/Users/adam/Documents/main_bt/windmill/fb email=username@slide.com password=pass report=true exit
rm /Users/adam/Library/Cookies/Cookies.plist

I am telling windmill to run the tests in the windmill/fb hierarchy against facebook.com in Safari, with the provided email and password, then to report its results and exit.

The only thing different on our Windows test runs is the way we kill the processes:
Example:
taskkill /F /T /IM windmill.exe
taskkill /F /T /IM firefox.exe

You might be asking how I use those variables; check it out in my setup module:

from windmill.authoring import WindmillTestClient
import functest

def setup_module(module):
    client = WindmillTestClient(__name__)
    # email and password come from the key=value arguments on the windmill command line (via functest.registry)
    client.type(text=functest.registry['email'], id=u'email')
    client.type(text=functest.registry['password'], id=u'pass')
    client.click(id=u'doquicklogin')
    client.waits.forPageLoad(timeout=u'100000')

You can also read a great entry about adding reporting to your tests on Mikeal Rogers’ blog, here.

And that last line, removing Cookies.plist, makes sure that the next test run starts without any leftover cookies to cause problems.

Have Hudson keep you updated on Jabber:

Grab the generated XML output so you can view the test results in Hudson:
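
This is Hudson’s JUnit test result report publishing option; assuming the XML ends up in the job’s workspace, a test report pattern along the lines of continuous_test.log (or simply *.log) should pick it up.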

Do this for each of the test runs you would like to have, and boom — continuous integration:

This is obviously a simple scenario, and you can do way, way more customization.. but this should get you off the ground. Happy testing!



 


04
Sep 08

Bringing Windmill to Life

Windmill Logo

Project Status

I have spent nearly every day since July 7th working to bring the Windmill Project up to a level where it can be used reliably in a production environment. Our mission starts with “Windmill is a web testing framework intended for complete automation of user interface testing”, of course this refers to the web including everything and anything inside the browser window. This turns out to be a very large task, one that only an Open Source labor of love could possibly attempt to accomplish.

Windmill has slowly evolved as a project with user contributions, a moderately active IRC channel, and enough users to keep me from forgetting what a useful and powerful tool it is. When I was offered the opportunity to work on the project I quickly saw how much needed to be done in order to get to where we needed to be. We still aren’t quite there, and like most Open Source projects we might not ever get to the envisioned perfection; however, recently we hit a very important milestone. The project is now fully hosted and run by the committers, and in many ways “Grown Up”, thanks to a lot of good advice and hard work. The milestone we have reached is that Windmill is ready for YOU to use. This week we pushed 0.8.2, a release that addresses all of the major issues that we know about and have discovered with heavy usage over the past months. Our hope is that you will go install Windmill 0.8.2 and things will just WORK. If not, I can’t wait to get your issues in Trac and see what we can do to fix them.

Priorities

The main things we care about when it comes to our web testing tools:

  • Low barrier to entry, low learning curve, and ease of use
  • Thorough documentation, community and project support
  • Support for the big 3 platforms: Windows, MacOSX and Linux
  • Support for the big 4 browsers: Firefox, IE, Safari and Opera
  • Easy integration with continuous integration tools
  • Reliability: developers aren’t going to pay attention if the failures aren’t real
  • A really nice looking logo, and a web site that is easy on the eyes..

There are always more features to implement, but Windmill hasn’t needed new features for a very long time. What Windmill needed was some serious QA, some code cleanup and a whole mess of bug fixes. If you look through the Trac Timeline you will see the massive amount of all of the above that has happened, and I am proud as hell when I launch the application today and see all that it can do.

What can Windmill do?

Windmill offers the ability to build, write, record and run tests as well as aid in debugging and development. In addition, the framework provides the ability to create and maintain hierarchies of smart and thorough tests that will ensure the quality of your web applications over time. Not only can we save you hours creating and maintaining tests, but we can also help you see your web application as a growing, feature-rich product, instead of a QA nightmare.

Many tools out there provide ways to write tests, some even provide recorders and DOM explorers, but none that I have ever seen provide this rich functionality cross platform and cross browser, which is really what is required in order to build a thorough test repository that represents all your possible users.

The current set of major features, along with more details about what is currently available, can be found on the Windmill Features Page. One of the more exciting new features is the full integration with Firebug Lite. Web developers rely on the existence of Firebug in order to quickly build and debug web applications, and Firebug Lite is the next best thing. It’s hard to even describe how useful it has been to instantly access the JavaScript Console and DOM inspector in IE to debug a failing test. As the Open Source community grows, and tools are improved and brought to light, I think it’s very important to do everything we can to utilize these tools and use them to enhance the Windmill Framework.

Keeping it Open

The Open Source aspect of Windmill has turned out to be its greatest asset. The project is almost entirely written in JavaScript and Python, which instantly gives us many advantages over the competition. The JavaScript community is constantly evolving and is most certainly the technology platform of the future. Python has a very strong community as well and has given us immense amounts of functionality and flexibility right out of the box.

One of the most exciting things to me personally about this particular project is the immense potential user base out there, and the large impact the Windmill Tools can have on the daily workflow of its users. Windmill was obviously inspired with the hopes of minimizing the need for manual testing of rich web applications, and has grown to be much more than that.

The future work on Windmill will primarily be driven by the needs of its users, the changes and development of the industry, and how well it reaches the goal of making web automation better.

Moving Forward

Concluding this major push of work, testing, documentation and moving of infrastructure, we now need to see how the community feels. There are lots of choices out there for web automation and we have made many differentiating choices along the way. It is now time to get the word out and take in some real feedback.

Thank you all for your input, contributions, patience and valuable feedback. Those of you who spent many hours on Freenode in #windmill with us debugging and hunting down those spastic blockers are troopers and we really appreciate it.


29
Aug 08

JUnit Compatible Reporting for Windmill

A large part of the utility in a testing framework like Windmill is the ability to interoperate with a continuous integration environment. Much of the work that has gone into Windmill recently has been the result of continuous integration needs. There are many ways to do this with existing software packages out there, including Tinderbox, Buildbot and Cruise Control; however, we picked Hudson as a result of the super small learning overhead and the amazing simplicity of setting up slaves on the network.

One of the requirements, of course, for parsing results is JUnit compatible XML output from the Windmill test runs. I don’t claim to be a Python wizard, or an XML/Java wizard for that matter, but it wasn’t that painful to hammer out a function to generate some minimal output to get the process off the ground.

I would love to get a wiki page up on Get Windmill to start documenting the many ways to use Windmill in a continuous integration environment. So let me know if you have a working setup and would like to contribute.

Example Reporting Excerpt from __init__.py:

from functest import reports
from datetime import datetime
 
class JUnitReport(reports.FunctestReportInterface):
    def summary(self, test_list, totals_dict, stdout_capture):
 
        total_sec = 0
        for entry in test_list:
            time_delta = entry.endtime - entry.starttime
            total_sec += time_delta.seconds 
        out = '<?xml version="1.0" encoding="utf-8"?>\n'
        out += '<testsuite errors="'+str(totals_dict['fail'])+'" failures="'+str(totals_dict['fail'])+'" name="windmill.functional" tests="'+str(len(test_list))+'" time="'+str(total_sec)+'">\n'
 
        for entry in test_list:
            if entry.result is not True:    
                entry_time = entry.endtime - entry.starttime
                out += '<testcase classname="'+entry.__name__+'" name="'+entry.__name__+'" time="'+str(entry_time.seconds)+'.'+str(entry_time.microseconds)+'">\n'
                out += '<failure type="exceptions.AssertionError">\n'
                #out += str(stdout_capture)
                #until I can figure out how to get the traceback
                out += 'There was an error in '+ entry.__name__
                out += '\n</failure>\n'
                out += '</testcase>\n'
            else:
                entry_time = entry.endtime - entry.starttime
                out += '<testcase classname="'+entry.__name__+'" name="'+entry.__name__+'" time="'+str(entry_time.seconds)+'.'+str(entry_time.microseconds)+'"></testcase>\n' 
 
        out += '<system-out><![CDATA[]]></system-out>\n<system-err><![CDATA[]]></system-err>\n'
        out += '</testsuite>'
        f=open('continuous_test.log','w')
        f.write(out)
        f.close()
 
reports.register_reporter(JUnitReport())
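
For reference, the file this writes out ends up looking roughly like the following (the test names here are made up; one passing and one failing test):

<?xml version="1.0" encoding="utf-8"?>
<testsuite errors="1" failures="1" name="windmill.functional" tests="2" time="14">
<testcase classname="test_login" name="test_login" time="6.120000"></testcase>
<testcase classname="test_profile" name="test_profile" time="8.340000">
<failure type="exceptions.AssertionError">
There was an error in test_profile
</failure>
</testcase>
<system-out><![CDATA[]]></system-out>
<system-err><![CDATA[]]></system-err>
</testsuite>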

Happy automating!


30
Jul 08

OSCON 2008 Recap

This year was my second year at OSCON in Portland, and it’s pretty amazing for me to look back at last July and know that I was working at OSAF. A lot can happen in a year, but what didn’t surprise me was the amount of people that I interacted with at the con that I had met during my OSAF experience.

A few things come to mind when I think about the conference as a whole. First off, who gave OSCON a Ruby adrenaline shot? The Ruby track was pretty extensive, and I would say even more prominent than the Python track this year. I felt like many of the talks were very introductory, with very few actual visual demos of things “working”. I know that OSCON brings a very diverse crowd.. but please, please come up with some way to show us whether a talk is advanced or not. I really get absolutely nothing out of introductory-level JavaScript sessions, but a title like “Digging into the guts of JavaScript” could pretty much mean anything under the sun.

Some of the most interesting talks I attended last year had to do with open mapping and location services, I know you want us to also attend the “Where” conference, but these things are part of Open Source and should be represented at OSCON!

I really enjoyed the talk about CouchDB; I hadn’t heard of it before, and it opened my mind up to some new concepts about how your application should interact with a database. I would advise everyone to check it out at http://incubator.apache.org/couchdb/.

Another was the “Django Tricks” talk, which was great because he just ran through a bunch of really cool examples — one of which was introspecting a SQLite DB to build models from the schema. Pretty cool stuff! Additionally, I think Ted Leung nailed his talk about “Open Source Community Antipatterns”. A lot of the ideas and concepts weren’t new to me, but it always helps to get a more detailed overview from someone who has seen these patterns repeated over the last 10 years.

The best quote I heard was that the “second OSCON starts at 6pm each night.” I completely agree with this; the social aspect of the conference is invaluable, but watch out for all that free booze — it sneaks up on you if you aren’t careful.

I do feel as if I should have done a Windmill talk this year; I didn’t see anything from Selenium or Watir, and if we had been a little farther along with the next iteration of Windmill it would have been a great venue to get some serious exposure. I may attend some other conferences this year, or wait till OSCON next year for Windmill to make its big splash.


23
Jul 08

Oscon 2008 Schedule

Every year I like to make myself a road map of how I will be spending my time during OSCON. With so many interesting talks, gatherings and social events, it’s tough to get to all the things you care about.

At this point in my career my focus is on Web Development, Test Automation (specifically for the web & browsers), and social networking. Obviously on a moment by moment basis your interests are pulled in varying directions, but that sums up the bulk of my attention.

If you are interested in the full schedule grid, it can be seen here: Oscon 08 Schedule Grid.

Wednesday

  • 8:45 AM: Welcome
  • 9:30 AM: Keynote
  • 10:45 AM: “An Introduction to Ruby Web Frameworks” (It’s going to be tough to convince me to move away from Django), though “Changing Education… Open Content, Open Hardware, Open Curricula” looks more interesting today.
  • 11:35 AM: This one is tough: either “Web Graphics and Animations without Flash”, “Beautiful Concurrency with Erlang”, “Beyond REST? Building Data Services with XMPP PubSub”, “CouchDB from 10,000 ft” (apparently that’s the thing to see), or “What Has Ruby Done for You Lately?”
  • 12:20 PM: Really important, LUNCH!
  • 1:45 PM: Probably “Thunderbird 3”, maybe “The Open-Source Identity Revolution”
  • 2:35 PM: “Caching and Performance Lessons from Facebook”; you never know when this one might come in handy working for Slide Inc.
  • 4:30 PM: “Open Source Community Antipatterns”, I’m really looking forward to hearing Ted Leung explain how to NOT run an Open Source Project…
  • 5:30 PM: Probably “Give your Site a Boost with memcached”, or “Shell Scripting Craftmanship”

Thursday

  • 8:45 AM: Keynote
  • 9:30 AM: Keynote
  • 10:45 AM: “Open Source Microblogging”
  • 11:35 AM: “This is Your PostgreSQL on Drugs”
  • 1:45 PM: “CSS for High Performance JavaScript UI”
  • 2:35 PM: “Stupid Django Tricks”
  • 4:30 PM: Either “Fixing Hard Problems Through Iterative QA and Development” or “Effective Software Development with Python, C++, and SWIG”, as I have worked with both speakers (Clint Talbert and Robin Dunn, respectively). OR “Machine Learning for Knowledge Extraction from Wikipedia & Other Semantically Weak Sources”. This is a hard one..
  • 5:20 PM: A couple of interesting choices jump out at me here: “Code is Easy, People are Hard: Developing Meebo’s Interview Process”, or “Designing Political Web Apps for MoveOn.org”; both could be really cool.

Friday

  • 9:30 AM: Plenary
  • 10:45 AM: “Toward a Strong Open Source Ecosystem” by Sara Ford at Microsoft? Interested to see what she has to say…
  • 11:35 AM: Oh hell yeah, “Searching for Neutrinos Using Open Source at the Bottom of the World”
  • 12:30 PM: Plenary
  • 1:30 PM: Plenary, Bye Bye’s
Off to the train to Seattle…

 

I am going to try a new thing using the WordPress app on my new iPhone 3G: jotting down small blog entries during the talks, then filling out the rest of each entry with more detail later.
It’s 2:41 now, so let’s see if I can get to that 8:45 AM.. yowch.

11
Jul 08

iPhone 3G — The Saga Continues.

As you all know — this morning at 8 AM PST, the new iPhone 3G was made available at Apple and AT&T stores on the west coast. Being a compulsive early adopter of such things, I somehow managed to tear myself out of bed around 6 AM this morning and head down to the Apple Store in Emeryville, California. I arrived somewhere between 6:30 and 6:45 AM, and even though deep down I knew that it was going to be ridiculous — the whole experience still managed to be much crazier than I expected.

Approaching the Apple Store, every step revealed more and more people waiting in the line that stretched most of the sidewalk in front of the Emeryville Mall. In my relatively delirious state, I accepted the situation and joined Mikeal in the line. About 15 minutes into our wait, the Apple Store staff and mall security started alerting people at the end of the line (where we were) that the line building to the left of the Apple Store was going to be in the way of construction, and that they would like everyone to move to the right side of the store, but to stay in the same order. Clearly this is an absolutely ludicrous request considering that no one wants to be there waiting in the line, and everyone will do whatever they can to jump a few spaces. I instantly started walking to the other side of the store where no one was yet, and we found a nice set of steps to sit on about 50 feet from the front doors of the store. Instead of being annoyed by us, people went ahead and built a new line with us in it, putting us significantly closer to the magical new phone than we had been before.

I do have to admit that the Apple employees constantly walking up and down the line passing out water, answering questions and handing out necessary information did in fact distract us enough to keep me from losing my mind. The Pandora guys stopped by to chat, and gave out some pretty sweet hats. I have since tried the Pandora App on the new phone and it is really slick, certainly recommended.


Somewhere around 9:15 we made our way into the store, to be greeted by another line that lasted around 15 minutes before we could actually talk to a sales specialist to do the deed. This is where things started to fall apart for me. My sales specialist (who was a pretty cool guy) disappeared into the back and came back with the box for my new 16GB white iPhone 3G and started filling out the sale on the handheld device. After inputting all of my information, a big yellow box pops up on his screen saying that I am not eligible for the AT&T price and that my only option is to pay the full $699 to buy the phone without a plan. Considering that I have been with AT&T since the acquisition of Cingular, and have had an iPhone with them for a year, I couldn’t understand what the problem could be. Instantly I got AT&T on the line (it was amazingly fast to get a rep on a day like today), who proceeded to tell me that I had an overdue balance (due yesterday) and that I hadn’t been with AT&T long enough to be eligible for the upgrade and thus would be required to pay full price.

In my delirious state, I considered just paying full boat so I could get the hell out of there — or canceling my plan and just being done with it all. Instead I asked about three times to talk to a supervisor (and was told three times that they couldn’t “override any of the rules”). I do have to interject that she was polite and could have been much more unpleasant (T-Mobile, Verizon, let’s not go there), and a few minutes later I was on the phone with her supervisor. You must keep in mind that my poor sales specialist is standing there with my phone half rung up (probably there since 6 AM as well), looking at a long day of selling phones, dealing with unruly Apple fans and possibly having to listen to many unpleasant phone calls to the carrier. The supervisor, after a few minutes of back and forth and the realization that I was standing right there in this situation, announced that if I paid my overdue balance, I could get the discounted rate. Hallelujah!

I’m now all paid up and feeling like I dodged a serious bullet, and it’s time to head to the front of the store to open things up and activate the phone. A woman with a huge camera, filming this whole event, asked me a few questions and recorded me opening the phone… which was sort of strange. I wonder if I’m going to be on TV somewhere! We plugged the phone in and voila — a big error pops up from iTunes. We unplugged the phone and tried about 3 more times (as did everyone at the table trying to activate), and then I was released to go finish activation at home.

I’m not an infrastructure guy by any means, but didn’t anyone learn ANYTHING from the last time around? Call me crazy, but I would have assumed that this time around the servers for activation would have been beefed up enough to handle the load. The best part is that as soon as I left the store and went to use my iPhone 2G to call people to let them know I had survived and was heading out I received a “No Service” notice, and was now unable to use either phone.

I basically sat from 10:30AM to 1:30PM trying about every 5 minutes to activate the new iPhone and received the ugly error each time. FINALLY, it went through — and I am back to a working state of communication.

To answer your questions, yes, 3G is that much faster. The screen is a slightly different size, the device is lighter and thinner, and the buttons have been enhanced for more satisfying feedback. The camera looks exactly the same, but the Applications store makes it all worthwhile. I have been told that the phone has a GPS chipset, but for some reason one Application thinks I’m in Seattle and Google Maps thinks I’m in San Ramon — so there appears to be a problem there. One last quibble — every time the phone wants to use your location data, a dialog pops up asking if it’s okay. I understand the reasoning behind this, but please, please, please let us turn that off; it’s getting super annoying.




The applications I have installed and am really enjoying include:
- Where, Yelp, Google, Facebook, Jott, Remote, CheckPlease, Pandora, Shazam, Evernote, Movies.app, NYTimes, Whrrl, Loopt, and of course — Twitteriffic.

There are many more apps and games that I am going to explore as soon as I get a moment.

Was it worth it? Of course it was — all this insanity is half the fun.


27
Jun 08

Leaving Rearden Commerce, What’s Next?


What happened?

As some of you may have heard, today I resigned from my position at Rearden Commerce. Leaving a company is never a fun thing, because you know how you feel when you hear that someone else is leaving.. and you can see it in people’s eyes. I have reminded myself multiple times today that I am still going to be 30 minutes away, that most of my communication with those people has been via email and IM anyway, and that there is no reason for me not to stay in touch.

Why did I resign?

That’s a very good question. Let me preface this by saying that I really don’t have anything about Rearden that I can point at and say ‘this thing’ is why I left. Rearden is a great company; they were professional throughout my entire experience there. They employ many very talented and driven engineers, and they have a great product. My gut feeling after spending some time there is that they will do very well. The management team is very skilled and they know their market and niche extremely well. Every day I went to work I heard about a new major deal or a small company Rearden had acquired to contribute to their march toward owning the ‘Personal Assistant’ space.

When I first arrived there I struggled with two things, which ultimately wound up being my demise as an employee. I have an extreme passion for Open Source, being part of that community, and giving my time to contribute. So you are probably thinking, ‘Why didn’t you just do that on the side?’ — well, the answer is that I did do it on the side, and the results were slow and my sleep schedule paid the price. Rearden has a very business/enterprise-specific niche at the moment, and building and deploying new features to those customers is a priority (as it should be), but I couldn’t stop my Open Source envy.

Secondly, an overwhelming majority of their user base is using IE6. As a web developer — the last thing I do when building anything in client-side JavaScript is to test it in IE6. I basically hold my nose, load the page and pray that things ‘mostly work’. Now I’m not going to claim that I can ever get away from doing this, but building really cutting-edge features based on new technology becomes significantly less probable when you are catering to this crowd. I know that Rearden has some really cool future plans, and is publicly talking about bringing the application to the consumer market — but I’m impatient and I just simply didn’t want to wait.


What’s Next?

I am going to jump right into a gig with Slide Inc. as a Web Developer. However, before I get to any Web Development tasks I am going to be addressing a pretty serious need they have in their QA department. Slide currently has many applications that are used directly on their site, slide.com, and on social networks (primarily facebook.com and myspace.com), and right now they have essentially no functional automation.

At OSAF I saw what a major difference automated testing can make, and the reason I am so excited about this is that I was a QA Engineer at one point, manually testing a pretty complex web application (Cosmo), and I have seen how much of a difference test automation can make in the release cycle, the development cycle, QA test cycles and simply the daily lives of your poor QA teams.


How am I going to accomplish this task, you might ask? That’s the best part — I have fixed about 10 bugs in Windmill in the last week, and will be putting whatever effort is required into getting Windmill to a state where we can functionally automate all of Slide’s application testing. This looks to be a serious win for Slide, and a serious win for Windmill.

At some point in the future, when I feel that this project is to the point where it can be maintained and built on by the Slide QA teams, I will move on to Web Development tasks. At that point a smaller amount of time will still be allocated to maintaining Windmill, adding new features that Slide and the community need and working towards the next evolution of Windmill. That is quite a ways off in the future, so I will address all that when the time comes.

The rest of my ‘free’ development time will be consumed by a project that I am involved in with the Mozilla Corporation. This project lives in the QA realm as well, and could probably be classified as a distant cousin to Windmill. More details about that will be announced the week of OSCON, so keep your eyes peeled.

Change can be extremely tough, but it is also very exciting. I want to thank all of my former peers at Rearden for a good experience, and I wish them all the absolute best.


27
Jun 08

Real Estate Data Services

This is my final business review from the high school era; however, this one is especially important because it forced me to get my hands dirty with some serious database work and made me write more PHP boilerplate than I had ever dreamed of up to that point. FYI, the person driving this business was a teacher at my high school (his last name goes in the graphic above), though I never took a class from him. He had spent a lot of time working in the appraisal part of the real estate market, and as with any repetitive process, people start to wonder how it could be automated and simplified.

Idea

The booming real estate markets of the late ’90s and early ’00s inspired many (especially those who had been involved in the industry) to start seeing dollar signs. As more people were buying and building homes, more appraisals and inspections were ordered. In case you haven’t been around anyone who does appraisal work, you should know that the research and comparison pieces of the report consume large chunks of time.

There was a point in time where, to get information about lots, land and people’s homes, you would have to physically go to the county assessor’s office and look through the stockpiles of records, plat maps, etc. to find the comparable properties on which to base your valuation. All of this information is publicly available and one just needs to go ask to see it.

It didn’t take long for a few companies to spring up with the idea that they would aggregate all this data, and they did it well enough to make a pretty solid business out of it. However, at that time companies like the MLS distributed this data on CDs, which you received regularly and had to load onto your computer (I’m sure they still have this as an option), but Mr. Teacher had the idea that it would be much more convenient if people could just access all this data via the Internet.

Stack

In case you were wondering about the technology stack we were using to build this, it was as follows:
Apache Web Server, PHP3 and MySQL. Your standard LAMP stack, but before it was your “standard LAMP stack”.

Pitfalls

I must admit that when I accepted this gig I really had no idea what I was getting into. I made promises that I wasn’t completely confident about, but ultimately my lack of experience didn’t turn out to be the killer.

  • For a site like this to succeed we would need many counties’ worth of data
  • Data needs to be kept up to date (picking up CDs all over the state every other day is unreasonable)
  • The provided data was not in a reliable format
  • CDs full of 100MB comma-delimited files are difficult to work with
  • Building a web-based competitor to the MLS by yourself when you are 16 is rather daunting

To expand a bit on the above, even after I had a site designed, user logins working, profiles working, and the first round of data for each county searchable, I still hadn’t reached the bulk of the work. At this point my method was to create a PHP script for each file’s particular format and parse through it doing DB inserts. Since each file (even new files for counties I already supported) kept changing format, I was continually updating the scripts, trying to make the exploded entries in the arrays match up to the DB columns and so on.
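
Each of those scripts boiled down to the same pattern; here is the shape of it, sketched in Python rather than the original (long gone) PHP3, with a completely made-up column mapping:

import csv

# Hypothetical mapping from column positions in one county's export file to our DB columns.
COLUMNS = {0: 'parcel_id', 3: 'owner_name', 7: 'acreage', 12: 'assessed_value'}

def parse_county_file(path):
    records = []
    for fields in csv.reader(open(path)):
        records.append(dict((name, fields[index]) for index, name in COLUMNS.items()))
    return records  # each record then became an INSERT into the corresponding MySQL table

Every time a county changed its export format, the mapping (and usually more than just the mapping) had to change with it.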

Killer

When you are looking to jump into any market, you first need to take a look at the competition. What is going to keep them from squashing you like a bug? Think about it: they have resources, money, people and hopefully some insight into the market. It is much easier for them to create and deploy than it is for you, and they will, and they did.

Not too long after our 4th or 5th iteration of data and some testing, MLS announced their web based service. Around that same time, many smaller (already existing) companies in the real estate market announced that they would be doing the exact same thing.

We could have forged ahead; we had a rough working beta, and with some serious persistence we could have built up a small user base by offering lower pricing… but that wasn’t my top concern. I believe that after my involvement tapered down, Mr. Teacher continued forging forward. A moment ago I checked the domain where the beta was available, and it’s no longer even registered.

Lessons

  • Do your market research
  • If time is an issue, hire a reasonable size team
  • Always get signed contracts (I’m pretty sure he still owes me money)
