"Don't sit in the same spot twice."
This, the first line in my notebook. This, the first lesson -- suggestion? command? -- of our exclusive new class, "Software Ventures." This, a strange new experience in a university where lectures are de rigueur, comprehension often requires several trips to the TA, and "hands-on learning" is most commonly found in the oft-scorned classrooms of the arts and humanities.
So why are we sitting here, nary a notebook in sight, having a relaxed conversation with our new professor (rare!), and starting off the semester by discussing the dynamics of our seating arrangement?
Because software startups, we would find out first-hand, are a dynamic and unstable environment at best, and any aspiring acolyte of this exhilarating career must be prepared to think up a new idea, a new approach, or an entirely new set of goals at the drop of a hat.
This was the first taste we students got of our fifteen-week journey into understanding, conceptualizing, and creating our own unique software ventures. Get ready to have a blast (and suffer).
The love-hate relationship I quickly developed with this class notwithstanding, the one clear takeaway from those first minutes was that the lessons would be hugely valuable. That very first day, we hit upon so many crucial points:
Make your business model scalable and repeatable.
Startups are small. Corporations are not. And if you plan to be anything beyond mildly successful with your startup (and really, there is no such thing as "mildly successful" in this field -- it's live or die), then you should be prepared to grow. If your idea isn't flexible enough to expand into a broader and larger environment, your business will burn out before it even begins to saturate the market.
It may seem premature, but focusing on the future from the very beginning will help you understand the growth of your own business.
Your company should operate as easily with ten employees as with ten thousand -- you never know when a surge in business will require hundreds of new hires to cope with new work, or when a fall in the market will force multiple layoffs. New customers should be absorbed cleanly, whether they're the only new clients this month or the first of hundreds in a single day. Knowing how to grow is as crucial as knowing your business itself.
"Product Market Fit"
Broad thinking is fantastic. If you envision a future filled with flying cars and regular traffic to Mars, more power to you, but your pitch for "Software Interfaces for Piloting Intergalactic Vessels" probably won't raise much capital, no matter how impassioned your presentation.
Think hard about your product. How does it fit into a known segment of the market? Answering this will make sure you're on target, at least when you start out.
Bootstrap to build something cheap, and build it fast.
The old advice "lift yourself up by your own bootstraps" may seem as unhelpful as someone telling you to "man up and get back to work" when you're struggling with depression, but when you're developing a startup, that's exactly what you need to do. Work hard, and work fast; the sooner (and the cheaper) you get a working product, the better you'll be able to get your idea off the ground -- and maybe even have resources left over to, oh, say, run your company.
Find out what your user wants before you start.
I get it, you're smart. You've got a great idea for a new piece of software. Everyone's going to use it. Right? Maybe not. Your targeted users have an uncanny way of despising the very features you think are beautiful. Maybe you do know people well, and your idea is exactly what they want. But you won't know until you confirm it, and as usual, it's better to find out now than after you've sunk thousands of your investors' dollars into a false start.
Find your "Vertical".
Just as an overly broad vision for your product can get you in trouble, trying to sell to everyone all at once can kill you. Find a specific "vertical", a small part of the market your product fits into well -- say, shipping specialized items to gourmet restaurants, if you've developed a radical new way to organize delivery networks efficiently. Don't lose the big picture, but start small.
These are just some of the big ideas we picked up after the first wonderful, exhausting, frustrating three-hour session of "Software Ventures". It was a great way to kick off the semester, and everyone there -- our professor included -- knew that this was going to be a growing experience like none we'd had before.
Thanks for reading, and please stay tuned for the next segment on lessons learned in software startups. If you have any questions or comments, feel free to contact me (martin at mberlove.com). And remember -- don't sit in the same spot twice!
Wednesday, May 14, 2014
Confirmed: "Computers are Fast" (!)
After reading this post by Julia Evans, which considers CPU speeds somewhat more deeply than its title implies ("Computers are Fast"), a fragment from a recent conversation with one of my computer science professors came to mind.
Simply, and somewhat paraphrased: "almost all processes are rapidly becoming I/O-bound."
Not so long ago, in OS Design class, a homework assignment and several exam questions tasked us with carefully identifying whether a process would be I/O-bound or CPU-bound based on its actions and properties. Would "I/O-bound" have consistently been the correct answer?
Not according to the professor of that class, at least, since I remember a few answers to the contrary. And I'd be willing to wager that there remain enough computationally-intensive tasks that OSs must take CPU-bound costs into consideration when scheduling processes, at least in some areas of work.
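To make the distinction concrete, here's a crude sketch of the kind of measurement involved (the class name, loop size, and line count are arbitrary choices of mine, and the timings are only illustrative): one task that does nothing but arithmetic, and one that does little but wait on the disk.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.List;

    // A crude side-by-side: one purely computational task, one purely
    // disk-bound task, each timed with System.nanoTime().
    public class BoundDemo {
        public static void main(String[] args) throws IOException {
            // CPU-bound: pure arithmetic, no system calls at all.
            long start = System.nanoTime();
            long sum = 0;
            for (long i = 0; i < 100_000_000L; i++) {
                sum += i * i;
            }
            // sum is printed only so the JIT can't discard the loop;
            // it wraps around long, which is fine for a demo.
            System.out.printf("CPU-bound loop:  %.1f ms (sum=%d)%n",
                    (System.nanoTime() - start) / 1e6, sum);

            // I/O-bound: the time here is dominated by the filesystem,
            // not by the trivial work of producing the lines.
            List<String> lines = new ArrayList<>();
            for (int i = 0; i < 1_000_000; i++) {
                lines.add("line " + i);
            }
            Path tmp = Files.createTempFile("bound-demo", ".txt");
            start = System.nanoTime();
            Files.write(tmp, lines);
            System.out.printf("I/O-bound write: %.1f ms%n",
                    (System.nanoTime() - start) / 1e6);
            Files.delete(tmp);
        }
    }

The absolute numbers will vary wildly from machine to machine, but where the time goes -- the ALU versus the filesystem -- is the whole point.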
But might gains in speed, parallelism, and optimization eventually sway the balance?
My guess is yes -- but only for personal-computing tasks. Scientific computing, for instance, seems likely to stay CPU-bound: I've never personally run a highly complex physics particle simulator on a time-slotted supercomputer, but I'd bet most of that work isn't too memory-heavy, especially compared to the insane number of calculations required (interesting note about reducing calculation cost).
And imagine how that balance looks for a computationally hard problem like prime factorization (which, admittedly, isn't officially known to be superpolynomial at this point). The time required to compute can be enormous, but the space complexity doesn't need to be bad at all (they're just integers, after all).
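As a toy illustration (my own sketch, nothing from the posts above): naive trial division against 2^61 - 1, a Mersenne prime, must grind through roughly 1.5 billion candidate divisors before giving up, yet its entire working set is a handful of long variables.

    import java.util.ArrayList;
    import java.util.List;

    // Trial division: running time grows with sqrt(n), but space stays
    // proportional to the number of factors found -- a few longs at most.
    public class Factorize {
        static List<Long> factor(long n) {
            List<Long> factors = new ArrayList<>();
            for (long d = 2; d * d <= n; d++) {
                while (n % d == 0) { // divide out each prime completely
                    factors.add(d);
                    n /= d;
                }
            }
            if (n > 1) factors.add(n); // whatever remains is itself prime
            return factors;
        }

        public static void main(String[] args) {
            // 2^61 - 1 is prime, so the loop runs all the way to sqrt(n)
            // (about 1.5 billion iterations) before concluding.
            System.out.println(factor(2305843009213693951L));
        }
    }

Expect it to chew on the CPU for a good while and allocate almost nothing -- enormous time, trivial space.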
I'm curious to see how things turn out over the next few years. It's an exciting time to be computing! -- and when isn't it?
Monday, May 12, 2014
A Remarkable Exercise in Programming Prowess
A remarkable exercise: reducing a programming language (Ruby, in this case) to its bare minimum while preserving its expressiveness and power, and demonstrating how that functionality can be derived from lambda functions.
Check it out here, at Programming with Nothing.
Could we see the same thing done in other languages? I imagine it would be easier in some and much more difficult in others. Java, for instance, is an intuitive language, but can its lambdas, added only recently, stand up to the test? Most Java developers rely so heavily on imported libraries (understandably) that it's hard to imagine getting much done without them.
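For what it's worth, the first step doesn't seem impossible. Here's a hedged sketch of mine (emphatically not Tom Stuart's construction, and with the element type pinned to Integer to keep Java's generics readable): Church numerals, where the number n is just a lambda that applies a function n times.

    import java.util.function.Function;
    import java.util.function.UnaryOperator;

    // Church numerals from nothing but lambdas: a numeral n is a function
    // that applies f to x exactly n times.
    public class ChurchDemo {
        // Fixing the element type to Integer keeps the generics readable.
        interface Church extends
                Function<UnaryOperator<Integer>, UnaryOperator<Integer>> {}

        static final Church ZERO = f -> x -> x;

        // SUCC n: one more application of f.
        static Church succ(Church n) {
            return f -> x -> f.apply(n.apply(f).apply(x));
        }

        // ADD m n: apply f n times, then m more times.
        static Church add(Church m, Church n) {
            return f -> x -> m.apply(f).apply(n.apply(f).apply(x));
        }

        // Convert back to an ordinary int by counting applications of (+1).
        static int toInt(Church n) {
            return n.apply(x -> x + 1).apply(0);
        }

        public static void main(String[] args) {
            Church one = succ(ZERO);
            Church two = succ(one);
            System.out.println(toInt(add(two, two))); // prints 4
        }
    }

Whether the full construction -- all the way up to FizzBuzz, as in the original -- would survive Java's type system is exactly the open question.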
Has this already been done? Or are repetitions of this experiment just dying to be performed on new languages? It's exciting....