Distributed computing infrastructures are commonly used for scientific computing, and science gateways provide complete middleware stacks that allow end users to exploit them transparently. However, administrating such systems manually is time-consuming and sub-optimal because of the complexity of the execution conditions. Algorithms and frameworks that aim to automate system administration must deal with online and non-clairvoyant conditions, where most parameters are unknown and evolve over time. We consider the problem of controlling task granularity and fairness among scientific workflows executed in these conditions. We present two self-managing loops that monitor the fineness, coarseness, and fairness of workflow executions, compare these metrics with thresholds extracted from knowledge acquired in previous executions, and plan appropriate actions to keep the metrics within acceptable ranges. Experiments on the European Grid Infrastructure show that our task granularity control can speed up executions by up to a factor of 2, and that our fairness control reduces slowdown variability by a factor of 3 to 7 compared with first-come, first-served scheduling. We also study the interaction between granularity control and fairness control: our experiments show that controlling task granularity degrades fairness, but that our fairness control algorithm can compensate for this degradation.
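To make the monitor/compare/plan cycle concrete, the sketch below shows one illustrative iteration of such a self-managing loop. It is not the paper's implementation: the names (`Workflow`, `Thresholds`, `control_step`) and the numeric values are assumptions made for the example, and the thresholds stand in for values extracted from knowledge of previous executions.

```python
# Minimal sketch of one monitor/compare/plan iteration of a self-managing
# loop for task granularity and fairness control. All names and values here
# are illustrative assumptions, not the paper's actual implementation.

from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Workflow:
    name: str
    task_durations: list   # observed or estimated task run times (s)
    slowdown: float = 1.0  # makespan relative to running alone

@dataclass
class Thresholds:  # stand-ins for values learned from previous executions
    min_task_duration: float    # below this, tasks are too fine: group them
    max_task_duration: float    # above this, tasks are too coarse: split them
    max_slowdown_spread: float  # above this, executions are unfair

def control_step(workflows, th):
    """Monitor metrics, compare with thresholds, plan corrective actions."""
    actions = []
    for wf in workflows:
        avg = mean(wf.task_durations)
        if avg < th.min_task_duration:
            actions.append((wf.name, "group tasks"))  # cut scheduling overhead
        elif avg > th.max_task_duration:
            actions.append((wf.name, "split tasks"))  # restore parallelism
    # Fairness: large spread in per-workflow slowdowns triggers re-prioritization.
    if pstdev(w.slowdown for w in workflows) > th.max_slowdown_spread:
        actions.append(("all", "re-prioritize to equalize slowdowns"))
    return actions

if __name__ == "__main__":
    wfs = [Workflow("wf-a", [5, 8, 6], slowdown=1.2),
           Workflow("wf-b", [900, 1200], slowdown=4.5)]
    print(control_step(wfs, Thresholds(60, 600, 1.0)))
```

In this toy run, `wf-a`'s short tasks trigger a grouping action, `wf-b`'s long tasks trigger a splitting action, and the spread in slowdowns triggers a fairness action; in the paper's setting these decisions are made online, with the thresholds derived from traces of previous executions rather than fixed by hand.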