Search-based advertising allows advertisers to run targeted campaigns for different groups of potential consumers at low cost. The advertising programs of Google, Yahoo, and Microsoft let an advertiser bid for an ad position on the result page of a user's query whenever the user searches for a keyword that the advertiser associates with its products or services. The expected revenue generated by an ad depends on its position, and the ad positions of the competing advertisers are determined by an instantaneous auction based on their bids. Advertisers are charged only when their ads are clicked by users. To avoid excessive ad expenditures caused by sudden surges in keyword-search activity, each advertiser reserves a fixed, finite daily budget, and its ads are not shown for the remainder of the day once the budget is depleted. The arrival times of keyword-search instances, the ad positions, the ad selections, and the sales generated by the ads are all random. An advertiser therefore faces a dynamic stochastic optimization problem: maximize the expected total net revenue subject to a strict budget constraint. Here we formulate and solve this problem using dynamic programming. We show that an optimal dynamic bidding policy always exists. We describe an iterative numerical approximation algorithm that converges uniformly to the optimal solution at a rate that is exponential in the number of iterations, and we illustrate the algorithm on numerical examples. Because the dynamic programming calculations of the optimal bidding policies are computationally demanding, we also propose alternative static and dynamic bidding policies. We numerically compare the performance of the optimal and alternative bidding policies by systematically varying each input parameter. The relative percentage loss in total net revenue of the alternative bidding policies increased with the budget loading, but never exceeded 3.5% of the maximum expected total net revenue. The best alternative to the optimal bidding policy turned out to be a static greedy bidding policy. Finally, statistical estimation of the model parameters is addressed.
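
To make the dynamic programming formulation concrete, the following is a minimal sketch, not the paper's actual model or algorithm, of a backward-induction recursion for budget-constrained bidding. It assumes a discretized day of T decision periods, an integer-valued budget, and hypothetical functions click_prob and cost_per_click that fold the position auction into a bid-dependent click probability and cost per click; all numerical values are illustrative assumptions rather than inputs taken from the paper.

```python
import numpy as np

# --- Hypothetical model inputs (all values are illustrative assumptions) ---
T = 200                      # number of decision periods in a day
W = 50                       # daily budget, in integer budget units
bids = [0, 1, 2, 3, 4, 5]    # candidate bid levels; 0 means "do not participate"
lam = 0.6                    # probability that a keyword search arrives in a period

def click_prob(bid):
    """Assumed click probability, increasing in the bid (stands in for the position auction)."""
    return 0.0 if bid == 0 else 0.1 + 0.06 * bid

def cost_per_click(bid):
    """Assumed cost charged per click (here simply the bid itself)."""
    return bid

revenue_per_click = 8.0      # assumed expected sales revenue per click

# V[t][w]: maximum expected net revenue from period t onward with budget w remaining
V = np.zeros((T + 1, W + 1))

# Backward induction over periods: at each state, pick the bid that maximizes
# the expected immediate net revenue plus the continuation value.
for t in range(T - 1, -1, -1):
    for w in range(W + 1):
        best = 0.0
        for b in bids:
            c = cost_per_click(b)
            if c > w:            # remaining budget cannot cover a click at this bid
                continue
            p = click_prob(b)
            # With probability lam a search arrives; the ad is then clicked with
            # probability p, which costs c, earns revenue, and depletes the budget by c.
            val = (1 - lam) * V[t + 1][w] + lam * (
                p * (revenue_per_click - c + V[t + 1][w - c])
                + (1 - p) * V[t + 1][w]
            )
            best = max(best, val)
        V[t][w] = best

print(f"Approximate maximum expected daily net revenue: {V[0][W]:.2f}")
```

The optimal bid in each period depends on both the time remaining and the budget remaining, which is what distinguishes a dynamic bidding policy from the static alternatives compared in the paper.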